A new study from MIT has sent shockwaves through the business world with a stunning claim: 95% of enterprise generative AI pilots are failing, delivering zero measurable return on investment.
The report, titled “The GenAI Divide,” suggests that despite a rush of $30 to $40 billion in enterprise spending, the vast majority of companies remain stuck, unable to extract real value from their AI initiatives. The finding quickly went viral, fueling a narrative that we’re in an AI bubble and the technology is massively overhyped.
But is that the real story?
To get past the explosive headline, I talked it through with Marketing AI Institute founder and CEO Paul Roetzer on Episode 164 of The Artificial Intelligence Show. He argues that a closer look at the study’s methodology reveals a very different picture.
A Headline That Demands Skepticism
When Roetzer first saw the 95% failure rate, his immediate reaction was skepticism.
“Anytime you see a headline like that, you have to immediately step back and say, okay, that seems unrealistic,” he says.
He points out that profound claims require profound evidence, and this study doesn’t stand up to scrutiny. After the report flooded his LinkedIn feed and came up repeatedly in live events, Roetzer did a deep dive into its methodology that led him to a firm conclusion:
“Please don’t put any weight into this study,” he warns. “This is not a viable, statistically valid thing.”
Unpacking the Flawed Methodology
Here’s where the research falls apart, according to Roetzer:
- It has a narrow definition of success. The study defined success as “deployment beyond pilot phase with measurable KPIs” and an “ROI impact measured six month post pilot.” This narrow focus on direct P&L impact within just six months ignores many other critical ways AI delivers value.
- It ignores key metrics. The methodology didn’t seem to account for crucial business impacts like efficiency gains, cost reductions, customer churn reduction, lead conversion improvements, or sales pipeline velocity. Roetzer asks, “If you're going to say something has zero return, how can you do that without acknowledging all the other ways that AI can benefit a business?”
- It relies on questionable data. The finding of “zero return” rests on just 52 interviews that the report itself admits are only “directionally accurate based on individual interviews rather than official company reporting.” The report also touts an analysis of over 300 public AI initiatives but never explains how that research was conducted or synthesized into the findings.
Why Bad Data Spreads So Fast
The MIT study went viral not because it was right, but because it fit a convenient narrative. People who believe AI is an overhyped bubble saw the headline as proof.
It’s tempting to want data to support and validate what you believe to be true about AI, says Roetzer. And it’s easy to find data to support most perspectives.
“But we need to be a little bit more honest with the things that we use to make these cases,” he says.
He cautions that in the rush to be the first to share “breaking” news, many people skip the critical step of actually reading the research. Instead of jumping on the bandwagon, he advises taking a few minutes to read the methodology. Often, you’ll find that the data is being shaped to fit a pre-existing narrative or generate clicks.
How to Actually Make Your AI Pilots Succeed
While the study itself may be flawed, it serves as a valuable reminder for organizations to be strategic about their AI initiatives. Instead of getting bogged down in sensational headlines, Roetzer says companies should focus on the fundamentals of making pilot projects work.
He recommends a simple, practical approach:
- Have a plan. Don’t just hand out tools like ChatGPT, Copilot, or Gemini and hope for the best.
- Personalize use cases. Give employees three to five specific use cases that help them get value from day one.
- Provide education and training. Treat AI adoption as a change management initiative, not just a technology rollout.
- Know how you’ll measure success. Success isn’t always about immediate P&L impact. Measure efficiency gains, productivity lifts, and other relevant metrics. A pilot’s success is rarely determined in just six months.
The Bottom Line
The viral MIT study is a critical lesson in media literacy for the AI era. While it’s tempting to seize on data that confirms our biases, it’s far more valuable to think critically, question headlines, and dig into the methodology behind the claims.
The reality is that successful AI adoption isn't about chasing hype. It's about thoughtful planning, strategic implementation, and a clear-eyed view of how to measure success.
As Roetzer advises, before you share the next shocking AI statistic:
“Take three minutes and just read the methodology,” he says. You might find the real story is very different from the headline.
Mike Kaput
As Chief Content Officer, Mike Kaput uses content marketing, marketing strategy, and marketing technology to grow and scale traffic, leads, and revenue for Marketing AI Institute. Mike is the co-author of Marketing Artificial Intelligence: AI, Marketing and the Future of Business (Matt Holt Books, 2022).