
Hyper-Realistic AI Video Is Outpacing Our Ability to Label It


The age of synthetic content has arrived. And if you spent any time this week on X, TikTok, or YouTube, you might not even know what was real.

In Episode 151 of The Artificial Intelligence Show, I talked to Marketing AI Institute founder and CEO Paul Roetzer about a rapidly escalating concern: the explosive rise of hyper-realistic AI-generated videos, especially those created with Google's new Veo 3 model.

With social platforms racing to implement transparency policies, one hard truth is becoming clear:

The technology is outpacing the tools meant to contain it.

Veo 3 Has Changed the Game

Veo 3 from Google DeepMind isn't just impressive. It's terrifyingly good. We're already seeing it flood social media with videos that are nearly indistinguishable from real footage. And in most cases? There's no label, no warning, and no obvious way for the average viewer to tell the difference.

"When I started seeing the Veo 3 videos, I was like, there's no way people are gonna have any clue this is AI," says Roetzer.

And while some platforms like TikTok and YouTube have rolled out new disclosure systems for AI-generated content, Roetzer says the current infrastructure is nowhere near ready.

Platform by Platform: The State of Disclosure

We reviewed what the major platforms are doing, and the picture is inconsistent at best:

  • TikTok: Uses auto-labeling via Content Credentials from the Coalition for Content Provenance and Authenticity (C2PA). But adoption is limited and inconsistent. If TikTok determines your content is AI, it may apply the label automatically—and you can't dispute or remove it.
  • YouTube: Requires creators to self-disclose when their content has been synthetically altered in ways that could mislead. However, there's little evidence that tools like DeepMind's SynthID are being used directly on the platform, despite YouTube and DeepMind both being owned by Google.
  • Meta: Offers guidance for labeling, but doesn’t mandate or enforce auto-labeling in most cases. The system relies heavily on user compliance.
  • X: Has a vague policy about inauthentic content and synthetic media, but offers no reliable labeling system. Roetzer noted that the most convincing AI-generated content he’s seen is showing up on X—and it’s rarely tagged.

Meanwhile, provenance and watermarking initiatives like C2PA (a standard for embedding signed provenance metadata) and SynthID (an invisible watermark) sound promising in theory. In practice? Roetzer says they're not widely adopted, especially by the AI labs producing the most advanced content. And unless platforms integrate these detection tools directly, they won't help everyday users.
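
To make the provenance idea concrete, here's a minimal sketch of how a platform (or a curious user) might check a downloaded file for C2PA Content Credentials. It assumes the open-source c2patool CLI from the C2PA project is installed and on your PATH, and that it prints any embedded manifest as JSON; the file name is a placeholder for illustration.

```python
import json
import subprocess


def read_content_credentials(path):
    """Attempt to read C2PA Content Credentials from a media file.

    Assumes the open-source `c2patool` CLI is installed; given a file,
    it prints the embedded manifest store as JSON. Returns the parsed
    manifest, or None if the file carries no credentials.
    """
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        # No manifest found, or the tool couldn't read the file.
        return None
    return json.loads(result.stdout)


if __name__ == "__main__":
    # "clip.mp4" is a hypothetical file name for illustration.
    manifest = read_content_credentials("clip.mp4")
    if manifest is None:
        print("No Content Credentials found; provenance is unknown.")
    else:
        print(json.dumps(manifest, indent=2))
```

The catch, as the episode makes clear: credentials only exist if the generating tool embedded them in the first place. For most viral AI clips circulating today, a check like this comes back empty.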

A Growing Mistrust

The result? Viewers are entering a world where everything looks real and nothing can be trusted.

Roetzer shared how he's now reflexively skeptical of any video he sees online, even those posted by verified sources. When he saw footage of a drone attack in Ukraine, his first instinct was doubt. Only after verifying it with trusted media outlets did he believe it was authentic.

"I've kind of arrived at that point where I just doubt everything until I verify it's real," he says.

That level of default mistrust may become the norm.

The Bigger Problem

Even if platforms eventually implement full detection and labeling capabilities, we're still faced with a structural issue:

The creators of these models aren't consistently building detection into their tools.

Google does have SynthID. But unless it's fully integrated into platforms like YouTube and X, it's not helping nearly as much as it could. C2PA has admirable goals, but without buy-in from major labs like OpenAI or Runway, its impact remains limited.

Until that changes, social platforms will remain reactive. And the average user will be left playing a losing game trying to determine what's real and what's not.

What Needs to Happen Now

This isn’t just about transparency. As Roetzer pointed out, these platforms are where billions of people get their news, form opinions, and engage with the world.

If we can’t clearly and consistently mark what’s real and what’s AI-generated, we risk undermining the foundation of shared reality.

The solutions aren’t simple. But the urgency is.

As Roetzer wrote on X:

“It seems irresponsible at this point to not publicly tag them on social media.”
