
AI Teaching Itself? It’s Called “Recursive Self-Improvement” and It’s Coming

Written by Mike Kaput | Dec 9, 2025 2:16:49 PM

The growing reality that AI might be able to learn from itself was the buzz of Silicon Valley last week.

It started when former Google CEO Eric Schmidt spoke at Harvard about how the AI industry is rapidly approaching “recursive self-improvement,” a concept in which AI systems can learn and improve without human instruction. Schmidt said this could happen within two to four years and warned about the need for limits on this technology.

Around the same time, OpenAI launched a new Alignment Research Blog dedicated to the safety challenges of these self-improving systems. And a team of former Google DeepMind researchers announced a new startup, Ricursive Intelligence, founded to create a recursive loop between AI models and chip design.

To understand this technology and why it might tip us toward Artificial General Intelligence (AGI), I spoke with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 184 of The Artificial Intelligence Show.

Self-Learning that Changes Everything

Recursive self-improvement has long been discussed in the tech world as a primary sign of AI progress, along with memory, reasoning, and multimodality.

“If an AI system gets good enough that it can meaningfully help design the next better version of itself, that loop keeps going,” says Roetzer.

The cycle would work like this: An AI proposes changes to its own architecture or training data. Those changes produce a more capable model. That new model is even better at proposing improvements. And so on, indefinitely.
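To make the shape of that cycle concrete, here is a toy Python sketch. Everything in it is hypothetical: the Model class, propose_improvements, and train are invented stand-ins, not any lab's actual pipeline. It only illustrates the feedback loop described above.

```python
# Toy sketch of a recursive self-improvement loop. All names and numbers are
# illustrative stand-ins; "capability" is a made-up score, not a real metric.
import random
from dataclasses import dataclass

@dataclass
class Model:
    capability: float  # toy proxy for overall model quality

def propose_improvements(model: Model) -> float:
    # A more capable model proposes (on average) larger improvements to itself.
    return random.uniform(0, model.capability * 0.1)

def train(model: Model, improvement: float) -> Model:
    # Applying the proposed changes yields the next, slightly better model.
    return Model(capability=model.capability + improvement)

model = Model(capability=1.0)
for generation in range(5):
    improvement = propose_improvements(model)
    model = train(model, improvement)
    print(f"generation {generation + 1}: capability = {model.capability:.3f}")
```

Each pass through the loop starts from a stronger model than the last, which is exactly why the process can compound.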

This could create an intelligence explosion, but it also raises the risk of losing human control.

“The danger comes when we start to rely less on the human in the loop,” says Roetzer.

To illustrate this, Roetzer applies the concept to a familiar domain: marketing. Imagine an autonomous AI agent running a campaign for a major event. It has access to all of your data, including email performance, ad buys, messaging, and budgets.

In a recursive scenario, the agent doesn't just suggest changes; it implements them instantly to optimize results. It rewrites emails, changes send times, alters ad creative, and reallocates budget, all without needing a human to direct it.
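In code terms, that kind of fully autonomous optimization might look something like the toy sketch below. The campaign data and decision rules are invented for illustration; the point is simply that no step in the loop waits for human approval.

```python
# Hedged illustration of an autonomous campaign-optimization loop.
# The metrics, thresholds, and adjustments are all hypothetical.
campaign = {"open_rate": 0.18, "ad_spend": 10_000, "send_hour": 9}

def agent_step(campaign: dict) -> dict:
    # Observe performance, then apply changes immediately instead of
    # surfacing recommendations for a person to review.
    if campaign["open_rate"] < 0.25:
        campaign["send_hour"] = 7       # change send times
        campaign["ad_spend"] *= 1.1     # reallocate budget
    return campaign

for _ in range(3):  # the agent keeps optimizing on its own
    campaign = agent_step(campaign)

print(campaign)
```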

“The human is maybe completely uninvolved,” says Roetzer. 

In that environment, he says, the disruption of jobs becomes much more likely.

From Fast to Faster

Why does this matter right now? Because if research labs enable AI to learn from itself, the timeline for every other major AI milestone accelerates.

“You crack the code on how to do it for AI models, everything else falls,” says Roetzer.

The primary bottleneck in AI research today is human bandwidth. Labs such as OpenAI and Google might run a few hundred major experiments a year to improve their models. But AI that learns from itself could increase that number exponentially. 

“What if they can run 50,000 experiments next year?” Roetzer says. “That's what they're trying to do. And to do that, you need some element of this recursive self-improvement because there's no way you could hire as many humans as you want.”

With self-learning systems, AI could swiftly drive its own evolution, leaving humans out of the loop.

This raises the urgency of understanding where the technology is heading and how it could impact work and society: potentially accelerated timelines for AGI, faster disruption of knowledge work, and a new set of safety risks.

No one knows how this will play out, but what seems certain is that the next few years of AI will move faster than the last few.