A new study from MIT has sent shockwaves through the business world with a stunning claim: 95% of enterprise generative AI pilots are failing, delivering zero measurable return on investment.
The report, titled “The GenAI Divide,” suggests that despite a rush of $30 to $40 billion in enterprise spending, the vast majority of companies remain stuck, unable to extract real value from their AI initiatives. The finding quickly went viral, fueling a narrative that we’re in an AI bubble and the technology is massively overhyped.
But is that the real story?
To get past the explosive headline, I talked it through with Marketing AI Institute founder and CEO Paul Roetzer on Episode 164 of The Artificial Intelligence Show. He argues that a closer look at the study’s methodology reveals a very different picture.
When Roetzer first saw the 95% failure rate, his immediate reaction was skepticism.
“Anytime you see a headline like that, you have to immediately step back and say, okay, that seems unrealistic,” he says.
He points out that extraordinary claims require extraordinary evidence, and this study doesn’t stand up to scrutiny. After the report flooded his LinkedIn feed and came up repeatedly in live events, Roetzer did a deep dive into its methodology that led him to a firm conclusion:
“Please don’t put any weight into this study,” he warns. “This is not a viable, statistically valid thing.”
The research itself falls apart under scrutiny, according to Roetzer. But the bigger question is why it spread so widely in the first place.
The MIT study went viral not because it was right, but because it fit a convenient narrative. People who believe AI is an overhyped bubble saw the headline as proof.
It’s tempting to seek out data that validates what you already believe about AI, says Roetzer. And it’s easy to find data to support almost any perspective.
“But we need to be a little bit more honest with the things that we use to make these cases,” he says.
He cautions that in the rush to be the first to share “breaking” news, many people skip the critical step of actually reading the research. Instead of jumping on the bandwagon, he advises taking a few minutes to read the methodology. Often, you’ll find that the data is being shaped to fit a pre-existing narrative or generate clicks.
While the study itself may be flawed, it serves as a valuable reminder for organizations to be strategic about their AI initiatives. Instead of getting bogged down in sensational headlines, Roetzer says companies should focus on the fundamentals of making pilot projects work.
His advice is to keep the approach simple and practical.
The viral MIT study is a critical lesson in media literacy for the AI era. While it’s tempting to seize on data that confirms our biases, it’s far more valuable to think critically, question headlines, and dig into the methodology behind the claims.
The reality is that successful AI adoption isn’t about chasing hype. It’s about thoughtful planning, strategic implementation, and a clear-eyed view of how to measure success.
As Roetzer advises, before you share the next shocking AI statistic: “Take three minutes and just read the methodology.” You might find the real story is very different from the headline.