
A Fundamental Rethinking of How AI Learns

Written by Mike Kaput | Dec 4, 2025 1:30:00 PM

Ilya Sutskever, the former Chief Scientist of OpenAI and a central figure in the modern AI revolution, recently opened up about his new venture, Safe Superintelligence (SSI). In a rare interview on the Dwarkesh Podcast, Sutskever discussed the philosophy behind SSI, why the "age of scaling" is ending, and why he believes the path to superintelligence requires a fundamental rethinking of how AI learns.

I recapped the interview with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 183 of The Artificial Intelligence Show.

The End of "Scaling," The Return of Research

For the past five years, the dominant strategy in AI has been simple: bigger computers, more data, larger models. This "scaling hypothesis" gave us GPT-3, GPT-4, and the generative AI boom.

But according to Sutskever, that era is hitting a wall.

“Ilya says scaling the current way will keep leading to improvements but something important will continue to be missing,” says Roetzer.

Sutskever argues that pre-training data, the massive scrape of internet text used to train LLMs, is finite. You cannot simply keep adding more text and arrive at superintelligence. Instead, the industry is moving back to an "age of research," where the focus must shift to reliable generalization and sample efficiency.

In other words, instead of building a model that has memorized the entire internet, SSI wants to build a model that learns like a human, capable of mastering new tasks quickly without needing to see billions of examples first.

Superintelligence, Bit by Bit

When SSI launched, its stated goal was a "straight shot" to superintelligence, implying the company would work in secret for years and only release a final, safe product.

However, in the interview, Sutskever hedged on this promise.

“I think even in the straight shot scenario, you would still do a gradual release of it,” Sutskever said on the podcast. “Gradualism would be an inherent component of any plan.”

For Roetzer, this admission is significant.

“That is a variation of a straight shot to superintelligence,” says Roetzer. “We were told from the beginning, ‘We're not releasing anything until we're there.’ And now he's sort of hedging saying, ‘Yeah, maybe the safe way to do it is actually iterative deployment like OpenAI is doing.'”

This suggests that even the most safety-focused labs may be forced by market dynamics (or pragmatic testing needs) to release incremental products along the way.

What Exactly Is Superintelligence?

Perhaps the most fascinating part of the conversation was Sutskever’s explanation of the goal itself.

He isn't trying to build a model that knows how to do every job in the economy. He wants to build a model that can learn to do every job.

“The way, say, the original OpenAI charter defines AGI is that it can do every single job,” says Roetzer. “You're proposing instead a mind that can learn to do every single job. And that is superintelligence.”

Once you have that learning algorithm, you deploy it into the workforce like a human employee. It learns on the job, gets better, and eventually surpasses human capability.

What’s Next?

Sutskever predicts this level of superintelligence is coming within five to 20 years.

The interview was dense, technical, and quite enlightening. The man who helped architect the current generative AI boom believes the next leap won't come from bigger data centers, but from a smarter, more human-like learning process.

“He is obviously an extremely important figure in everything we're going through right now,” says Roetzer.