Nov. 22 (Reuters) – Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
Both sources said the previously unreported letter and the AI algorithm were key developments before the board’s ouster of Altman, the poster child for generative AI. Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.
The sources cited the letter as one factor in a longer list of board grievances that led to Altman’s firing, including concerns about commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.
After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to employees a project called Q* and a letter to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said the message, sent by longtime executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s pursuit of what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak for the company. Though it was only performing math at the level of grade-school students, acing such tests made the researchers very optimistic about Q*’s future success, the source said.
Reuters was unable to independently verify the capabilities of Q* claimed by the researchers.
“VEIL OF IGNORANCE”
Researchers consider math a frontier of generative AI development. Currently, generative AI is good at writing and translating language by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math, where there is only one right answer, implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
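To make that contrast concrete, here is a minimal, purely illustrative Python sketch (none of this is OpenAI’s code; the prompt, candidate words and probabilities are invented): sampling from a next-word probability distribution can return different completions for the same prompt, while a proposed answer to an arithmetic question can be checked against the single correct result.

# Toy illustration only - not OpenAI's code. The prompt, candidate words
# and probabilities below are invented for the sake of the example.
import random

# Hypothetical next-word distribution a language model might assign
# after the prompt "The best city to visit in Australia is".
next_word_probs = {"Sydney": 0.5, "Melbourne": 0.3, "Canberra": 0.2}

def sample_next_word(probs):
    """Pick one word at random, weighted by its modeled probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Sampling the same prompt repeatedly can yield different completions.
print([sample_next_word(next_word_probs) for _ in range(5)])

# Math is different: a proposed answer is simply right or wrong.
def check_math_answer(proposed):
    """17 * 24 has exactly one correct answer."""
    return proposed == 17 * 24

print(check_math_answer(408))  # True: 17 * 24 = 408
print(check_math_answer(400))  # False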
Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and understand.
In their letter to the board, the researchers flagged the AI’s prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been debate among computer scientists about the danger posed by highly intelligent machines, for instance whether they might decide that the destruction of humanity was in their interest.
The researchers also flagged the work of a team of “AI scientists,” the existence of which multiple sources confirmed. The group, formed by combining the earlier “Code Gen” and “Math Gen” teams, was exploring how to optimize existing AI models to improve their reasoning and ultimately perform scientific work, one of the people said.
Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment and computing resources necessary to get closer to AGI.
In addition to announcing a slew of new tools at a demonstration this month, Altman told a summit of world leaders in San Francisco last week that he believed major advances were in sight.
“Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit.
A day later, the board fired Altman.
Reporting by Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker