Marketing AI Institute | Blog

The Alarming Rise of Nudify Apps and the Inability to Stop Deepfakes

Written by Mike Kaput | Nov 6, 2025 1:30:00 PM

A disturbing new wave of AI applications, often called “nudify” apps, is triggering major concern as these tools spread rapidly across platforms such as Telegram and Discord.

These tools can generate realistic nude images of anyone, often focusing on women and teens, from a single photo without their consent. AI ethics researcher Rebecca Bultsma recently warned the technology is "cheap, instant, and targeting real people." She said she discovered more than 85 such sites in under an hour.

Prominent public figures are being targeted by this technology. Scientists such as Michio Kaku and Brian Cox said deepfakes have impersonated them to spread false claims on YouTube and TikTok. Astrophysicist Neil deGrasse Tyson even deepfaked himself claiming the Earth is flat, just to prove how convincing and dangerous the technology has become.

The problem seems to be outpacing our ability to control it. To understand the scale of this threat and why it’s so difficult to contain, I spoke with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 178 of The Artificial Intelligence Show.

Bypassing the Safeguards

While major AI labs including OpenAI and Google can implement filters to block this content, Roetzer noted this is fundamentally a "platform distribution problem."

The core issue is that the technology to do this is already out in the wild.

“You are not going to stop AI models from being able to do these things,” he says.

That’s because powerful, closed models aren't required. Malicious actors can use smaller, open-source models and train them to create this content, completely bypassing the safeguards set up by big tech companies.

The Frightening Next Step: Deepfake Video

This isn't limited to static images. The next logical step is integrating these images with advanced video generation.

Imagine "Sora-like capabilities," Roetzer noted, where a user could "take someone, run it through the nudify app, extract the clothes, then upload it to a video platform and have it turned into a video of someone doing something they obviously never did.”

“It's horrendous,” he says.

The hard truth, said Roetzer, is that this technology appears here to stay.

“We can't stop it,” he says. “We have to have awareness about it. Schools have to be aware this is a problem. Parents have to be aware of this problem. Kids have to be aware this is a problem. It is part of society now.”

Unfortunately, as long as there is demand for such content, people will find ways to create it with the technology now available.

Awareness Is Your Best Tool

This wave of deepfakes marks a significant shift in our relationship with digital media.

“We've just entered a very different phase in society where the things we've always worried about being possible are now possible,” says Roetzer.

The greatest danger is that most of society still isn't aware of what AI can do. When they see a video or image online, their default assumption is that it's real.

With the technology itself impossible to contain, Roetzer says the only viable path forward is a massive public awareness effort.

“This goes back to that awareness and sort of pulling your peers, your family, your friends along and making sure everybody knows what's actually happening in the world,” he said.