The AI Cheating Crisis in Higher Education Is Worse Than Anyone Expected

A bombshell report from New York Magazine has ignited a fierce debate in academia—and it’s shining an uncomfortable spotlight on the growing crisis of AI-powered cheating in colleges and universities.

On Episode 147 of The Artificial Intelligence Show, Marketing AI Institute founder and CEO Paul Roetzer unpacked the article’s disturbing revelations, offering both a sobering look at how bad things have gotten, and what it means for the future of education, hiring, and critical thinking in the age of AI.

A Generation Raised on ChatGPT

At the center of the article is a truth that many are having a hard time accepting:

For a growing number of students, using generative AI to complete assignments isn’t an exception. It’s the norm. From Ivy League halls to community college classrooms, students are increasingly offloading their cognitive labor to AI, including automating note-taking, summarizing readings, writing code, and even generating entire essays.

The use of AI isn't what's turning heads. It's how often, according to the article, AI is being used to automate learning entirely.

One Columbia student flat-out admitted that AI wrote 80% of his coursework. Others use tools like ChatGPT to prepare for coding interviews or pump out last-minute essays with polished outlines and supporting points, all generated in seconds.

Overall, the article in New York Magazine paints the starkest picture yet of a student population that views AI not as a shortcut, but as the default path.

“College is just how well I can use ChatGPT at this point,” one student bluntly put it in the article.

The article is a wake-up call for parents, teachers, and administrators. But it's not just cherry-picking anecdotes, says Roetzer. This is a very real, very urgent trend happening everywhere.

"I think it's way bigger than most people realize," he says. "I've spent time with deans and provosts, and I'm not sure that the totality is being comprehended right now."

Teachers Are Overwhelmed—and Giving Up

Faced with this tidal wave, many educators are in open despair, according to the report. Some are attempting to fight back with AI detectors and Trojan horse phrases embedded in prompts. But most admit that detection tools are unreliable at best, and enforcement is practically nonexistent.

Professors have watched helplessly as writing assignments, once the bedrock of critical thinking, are completed by chatbots with no originality or student engagement. Some are retiring early. Others are being told to grade AI-written papers as if they were human work. One teaching assistant said his university's policy was to assume every essay was a “true attempt,” even if clearly machine-generated.

The article describes it as a "full-blown existential crisis" for teachers.

What Happens When They Enter the Workforce?

The article’s implications stretch far beyond the classroom, says Roetzer.

Today’s AI-reliant students are tomorrow’s employees. And if current trends hold, many will arrive in the workplace ill-equipped to handle roles that require independent thought, creativity, or even basic literacy.

"This is your employee base. This is your workforce of the future," says Roetzer. "They're going to come in having used all these tools, and you have to understand that and prepare."

He adds that employers must now consider a broader range of competencies: not just how well someone can use and prompt AI tools, but whether they can reason, write, and reflect without them.

"You should in your HR processes start looking for prompting abilities and the ability to work with these machines," he says.

"But you also have to actually figure out how to test for critical thinking skills with no devices. If you're conducting interviews over a computer, there's a reasonable chance these students are using AI while you're talking to them to answer you."

The same goes for internal AI policies. New hires will likely expect to use tools like ChatGPT at work, and might not even realize that unfettered usage could violate company rules. Organizations need to clarify AI policies from day one—and design interview processes that account for AI-native candidates.

The Big Question: Can We Fix It?

As the New York Magazine piece makes painfully clear, education has become increasingly transactional thanks to AI. Students appear to see no problem outsourcing their learning to AI. And teachers appear unable to stop it from happening.

Which, like it or not, puts the burden on parents to ensure that young people grow up seeing AI as an assistant, not a replacement.

"You have to understand that they have to be taught how to still think critically and be creative without always using it as a crutch," says Roetzer. "It needs to be there as an augmenting tool, not as a replacement for these things."

In the meantime, the question remains:

If AI can now write your paper, pass your test, and prep you for your job interview—what exactly is left for you to learn?

The answer may define the future of both education and work.
