Does ChatGPT Make You Dumber? What a New MIT Study Really Tells Us

A provocative new study out of MIT has ignited headlines claiming that ChatGPT might be harming your brain. But as is often the case with viral AI stories, the truth is far more nuanced.

On Episode 155 of The Artificial Intelligence Show, I broke down the study with Marketing AI Institute founder and CEO Paul Roetzer to determine what's worth paying attention to.

The Study at a Glance

Titled "Your Brain on ChatGPT," the study analyzed how different tools impact cognitive engagement during essay writing. Participants were split into three groups:

  • One wrote essays using only their memory ("brain-only")
  • One used a search engine
  • One used ChatGPT (GPT-4o)

Using EEG scans and linguistic analysis, researchers found that participants who relied on ChatGPT showed weaker neural connectivity and less engagement in memory and decision-making areas of the brain. Their essays were more uniform and less original. And they had more trouble remembering or quoting from their own writing—even just minutes after completing it.

But there’s a catch.

Why the Panic Is Premature

The paper, as AI expert Ethan Mollick points out, is being badly misinterpreted in viral posts and sensational headlines. He writes on LinkedIn:

"This new working paper out of the MIT Media Lab is being massively misinterpreted as "AI hurts your brain."

It is a study of college students that finds that those who were told to write an essay with LLM help were, unsurprisingly, less engaged with the essay they wrote, and thus were less engaged when they were asked to do similar work months later. It says something important about cheating with AI (if you let it do your work you won't learn) but it doesn't tell us anything about LLM use making us dumber overall.

This misinterpretation isn't helped by the fact that this line from the abstract is very misleading: 'Over four months, LLM users consistently under-performed at neural, linguistic, and behavioral levels.' But the study does not test 'LLM users' over four months, it tests (9 or so!) people who had an LLM help write an essay in an experiment writing a similar essay four months later.

To be clear, this isn't a defense of blindly using AI in education; these tools have to be used properly to be effective. We know from well-powered randomized controlled studies that just having the AI give you answers lowers test scores.

But that doesn't mean that LLMs rot your brain."

Roetzer agrees, pointing out that Mollick's point becomes obvious if you actually read about the study itself, rather than just consuming headlines.

"It's like saying we gave calculators to a control group who didn't know how to do math, and we found that people who relied on the calculator to do math didn't actually learn math," he says.

The real takeaway? If you use AI to bypass critical thinking, you’ll think less. That’s not a revelation—it’s just common sense.

From Research to the Real World

The study seems methodologically solid and points to a real effect. But it actually strengthens the case for responsible AI use, says Roetzer. In education and business, the real imperative is to teach people how to use AI tools to accelerate—not replace—learning and thinking.

That means:

  • Starting with human comprehension
  • Using AI to test, refine, or expand ideas—not to generate entire outputs blindly
  • Creating environments that reward cognitive engagement, not just finished deliverables

Roetzer also introduced a helpful framework for understanding the cognitive gaps that open up when people rely on AI outputs, and that humans need to be aware of and actively check:

  1. The Verification Gap: Humans need to fact-check AI outputs.
  2. The Thinking Gap: Humans have a limited capacity to critically evaluate AI-generated content.
  3. The Confidence Gap: When you present AI-created material without truly engaging with the underlying content, you feel uneasy because you don't fully understand what you're presenting.

Together, these gaps explain a growing dynamic in the workplace:

As we produce more with AI, we risk retaining and understanding less unless we recognize and address these gaps.

In Roetzer’s own experience, this dynamic plays out even in day-to-day meetings.

"I still type out everything in every meeting I go to," he says, rather than rely on an AI notetaker. "If I just have the notetaker, there's less cognitive load. But that cognitive load is actually what embed it in my memory."

The Bottom Line: Critical Thinking Still Matters

The MIT study, when read carefully, doesn’t say AI is inherently dangerous to our brains. It says lazy use of AI is. And that’s an important distinction.

So what’s the smart takeaway? Use ChatGPT. Use it often. But never outsource your brain. Because the skill that matters most in the age of AI is the same one that mattered before it: knowing how to think.
