<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=2006193252832260&amp;ev=PageView&amp;noscript=1">


What Really Happened at OpenAI?



Details are beginning to emerge about what happened at OpenAI—and it may be crazier than you think.

The company nearly imploded over the last couple of weeks after its board fired CEO Sam Altman. (A firing that triggered a mass revolt by staff.)

Now, Altman is back as CEO. There appears to be a new board. And co-founder Greg Brockman is also back.

(Have whiplash yet?)

What we still don't know is why the board suddenly fired Altman. Why was such drastic action taken—action that threatened to destroy the company?

Well, it turns out it may have to do with a huge, scary AI discovery that OpenAI made this year.

On Episode 74 of The Marketing AI Show, Marketing AI Institute founder/CEO Paul Roetzer unpacked what might have gone on behind the scenes.

OpenAI may have made a major AI breakthrough.

Sam Altman said something curious a day before his firing. Something that went largely unnoticed in the OpenAI drama that followed.

He told an audience at the APEC Summit that he'd “been in the room” recently for a rare breakthrough. In that room, he had seen OpenAI “push the veil of ignorance back and the frontier of discovery forward.”

Roetzer suspects we need to know what Sam saw in that room to understand what came next.

Ahead of Altman's firing, several staff wrote a letter to OpenAI's board with a warning. The warning? A new AI discovery at OpenAI "that they said could threaten humanity," reports Reuters.

It may involve AI’s ability to reason and plan.

Sources report the breakthrough may be an OpenAI project called Q* (pronounced Q-Star). Q* is an AI model that can "solve math problems that it hadn’t seen before," says The Information. This is an important technical milestone in AI. Some researchers also see it as a precursor to artificial general intelligence (AGI).

Chief Scientist Ilya Sutskever led the breakthrough. He also led the effort to fire Altman. (Though he later expressed regret for it.)

For years, Sutskever has worked on getting models to solve reasoning tasks. These types of tasks include math and science problems. His team hypothesized that the way to do it was by giving models more time to respond to questions.

This is a concept called “test-time computation.” It means “if you give it more time to think and to work through reasoning capabilities, it actually seems to do it,” says Roetzer.

Test-time computation is one way that major AI labs are trying to give AI planning capabilities.
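To make the idea concrete, here is a minimal, hypothetical Python sketch of one common form of test-time computation: sampling many independent answers from a model and keeping the most frequent one (an approach often called self-consistency). The `sample_answer` callable, the stand-in model, and the numbers are placeholders for illustration only; nothing here describes OpenAI's actual Q* work.

```python
# Illustrative sketch of "test-time computation" via self-consistency sampling.
# This is NOT OpenAI's Q* method; it is a generic, hypothetical example of spending
# more compute at inference time to improve answers on a reasoning question.
from collections import Counter
from typing import Callable, List


def solve_with_more_thinking(
    question: str,
    sample_answer: Callable[[str], str],  # hypothetical: any model call returning one answer
    num_samples: int = 16,                # more samples = more "thinking time" at test time
) -> str:
    """Sample many independent attempts and return the most common final answer."""
    answers: List[str] = [sample_answer(question) for _ in range(num_samples)]
    most_common_answer, _count = Counter(answers).most_common(1)[0]
    return most_common_answer


if __name__ == "__main__":
    import random

    def fake_model(question: str) -> str:
        # Stand-in for a real language model: right ~70% of the time on this question.
        return "42" if random.random() < 0.7 else str(random.randint(0, 100))

    print(solve_with_more_thinking("What is 6 * 7?", fake_model))
```

The point of the sketch is simply that spending more inference-time compute (more samples, more “thinking”) can improve accuracy on reasoning problems without changing the underlying model.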

Yann LeCun, one of the godfathers of AI, believes Q* is OpenAI's attempt to create AI that can plan.

He notes that OpenAI hired Noam Brown to work on this. Brown announced he was joining OpenAI on July 6, 2023. Brown's work (previously at Meta with LeCun) focused on AI planning and reasoning in games, including using test-time reasoning to improve AI capabilities. In his announcement, he said he'd be working on how to make these capabilities “truly general.” He also said that, if successful, the work could result in something 1,000X better than GPT-4.


One day earlier, Sutskever announced OpenAI's "superalignment" team. The team's task is to make sure AI systems much smarter than humans follow human intent.

These events don't appear to be coincidental, says Roetzer.

“Noam Brown didn't leave Meta unless he knew OpenAI was working on this stuff and that there was a path to apply his research immediately.

“He's not leaving to go do this three to five years from now. So timing-wise, whatever this Q* program is, whatever the breakthrough is, probably happened earlier in 2023. That led to some major advancements and the creation of the superalignment team.”

And it may have led to Altman's firing.

Let's tie this all back together. Before Altman's firing, he mentioned an unspecified breakthrough. Around the same time, he also told The Financial Times that OpenAI was working on GPT-5.

Now, we hear about Q*, which may have powerful planning and reasoning capabilities. These capabilities are directly relevant to Brown's work and the superalignment team's mission.

It sounds possible that Q*—or part of what it can do—scared some people at OpenAI.

The Information says a demo of the model recently made the rounds within OpenAI. And that "the pace of development alarmed some researchers focused on AI safety."

“Now, part of the friction appears to be that Sam and Greg obviously were aware of this capability,” says Roetzer.

“It sounds like maybe they were not only not stopping it, but they were potentially building some of these capabilities into GPT-5 and that the GPTs release was actually meant to accelerate some of what was being developed within this Q* program,” says Roetzer.
