
Ex-OpenAI Researcher Drops Bombshell AGI Predictions—And They're Terrifying

A series of explosive essays and interviews from former OpenAI researcher Leopold Aschenbrenner is sending shockwaves through the AI world with a chilling message:

AGI is coming this decade. And it could mean the end of the world as we know it.

Aschenbrenner, a one-time member of OpenAI's superalignment team, says he's one of perhaps a few hundred AI insiders who now have "situational awareness" that superintelligent machines, ones smarter than humans, will be a reality by 2030.

His mammoth 150-plus-page thesis, titled "Situational Awareness: The Decade Ahead," outlines the evidence behind this jaw-dropping claim and paints an urgent picture of a possible future where machines outpace humans...

And it's a future very few individuals, companies, and governments are truly prepared for.

And this unpreparedness could create serious problems for entire nations, economies, and international security itself.

What do you need to know about these bombshell predictions?

I got the inside scoop from Marketing AI Institute founder and CEO Paul Roetzer on Episode 102 of The Artificial Intelligence Show.

From whiz kid to whistleblower

First, some context on the man sounding the AGI alarm.

Aschenbrenner is a certified genius, for one. He graduated valedictorian from Columbia at age 19 (after entering college at 15) and worked on economic growth research at Oxford's Global Priorities Institute before joining OpenAI.

At OpenAI, he worked on the superalignment team run by AI pioneer Ilya Sutskever.

But that all unraveled in April 2024 when Aschenbrenner was fired from OpenAI for allegedly leaking confidential information. (He claims he simply shared a benign AI safety document with outside researchers, not sensitive company material.)

Regardless, the incident freed him up to now talk about all things AGI and superintelligence. And given his pedigree, the AI world is taking notice.

"This is someone who has a proven history of being able to analyze things very deeply and learn topics very quickly," says Roetzer. 

The topic also caught Roetzer’s interest because it aligns closely with a timeline for AI development that he himself recently outlined.

The road to superintelligence

The crux of Aschenbrenner's argument rests on something called scaling laws. 

These laws describe how, as we give AI models more computing power and make their algorithms more efficient, we see predictable leaps in their capabilities. 

By tracing these trendlines, Aschenbrenner says we'll go from the "smart high schooler" abilities of GPT-4 to a "qualitative jump" in intelligence that makes AGI "strikingly plausible" by 2027.

But it won't stop there. Once we hit AGI, hundreds of millions of human-level AI systems could rapidly automate research breakthroughs and achieve "vastly superhuman" abilities in a phenomenon known as an "intelligence explosion."

The trillion-dollar AGI arms race has begun

According to Aschenbrenner, the AGI race is already underway. 

He says the "most extraordinary techno-capital acceleration has been set in motion" as tech giants and governments race to acquire and build the vast quantities of chips, data centers, and power generation infrastructure needed to train more advanced AI models.

He continues:

“As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade.”

But the runaway progress isn't without major risks. 

Aschenbrenner alleges AI labs are treating security as an "afterthought," making them sitting ducks for IP theft by foreign adversaries. 

Worse, he says superalignment, the challenge of reliably controlling AI systems smarter than us, remains an unsolved problem. And a failure to get it right before an intelligence explosion "could be catastrophic."

The last best hope

To avoid this fate, Aschenbrenner calls for a massive government-led AGI effort.

No startup can handle superintelligence, he says. Instead, he envisions the U.S. embarking on an AGI project on the scale of the Apollo moon missions—this time with trillions in funding.

Doing so, Aschenbrenner argues, will be a national security imperative in the coming decade, with the very survival of the free world at stake.

“He says superintelligence is a matter of national security, which I agree with 100%,” says Roetzer. “If I were the US government, I would be aggressively putting a plan in place to spend trillions of dollars over the next five to ten years to house all the infrastructure in the United States.”

But the clock is ticking, Aschenbrenner says, for leaders to take this seriously and get it right.

A sobering warning  

While Aschenbrenner's predictions may sound far-fetched, Roetzer says we can't afford to ignore them.

“I know this is a lot, and it’s kind of overwhelming, but we all have to start thinking about these things,” he says. “We’re talking about a few years from now. We have to figure out what this means. What does it mean to government? What does it mean to business? What does it mean to society?”

Because if Aschenbrenner is even partially right, the future is coming faster than anyone is ready for.
