The age of synthetic content has arrived. And if you spent any time this week on X, TikTok, or YouTube, you might not even know what was real.
In Episode 151 of The Artificial Intelligence Show, I talked to Marketing AI Institute founder and CEO Paul Roetzer about a rapidly escalating concern: the explosive rise of hyper-realistic AI-generated videos, especially those created with Google's new Veo 3 model.
With social platforms racing to implement transparency policies, one hard truth is becoming clear:
The technology is outpacing the tools meant to contain it.
Veo 3 from Google DeepMind isn't just impressive. It's terrifyingly good. We're already seeing it flood social media with videos that are nearly indistinguishable from real footage. And in most cases? There's no label, no warning, and no obvious way for the average viewer to tell the difference.
"When I started seeing the Veo 3 videos, I was like, there's no way people are gonna have any clue this is AI," says Roetzer.
And while some platforms like TikTok and YouTube have rolled out new disclosure systems for AI-generated content, Roetzer says the current infrastructure is nowhere near ready.
We reviewed what the major platforms are doing, and the picture is inconsistent at best.
Meanwhile, provenance and watermarking initiatives like C2PA and SynthID sound promising in theory. In practice? Roetzer says they're not widely adopted, especially by the AI labs producing the most advanced content. And unless platforms integrate these detection tools directly, they won't help everyday users.
The result? Viewers are entering a world where everything looks real and nothing can be trusted.
Roetzer shared how he's now reflexively skeptical of any video he sees online, even those posted by verified sources. When he saw a drone attack video from Ukraine, his first instinct was doubt. Only after cross-checking it with trusted media coverage did he believe it was authentic.
"I've kind of arrived at that point where I just doubt everything until I verify it's real," he says.
That level of default mistrust may become the norm.
Even if platforms eventually implement full detection and labeling capabilities, we're still faced with a structural issue:
The creators of these models aren't consistently building detection into their tools.
Google does have SynthID. But unless it's fully integrated into platforms like YouTube and X, it's not helping nearly as much as it could. C2PA has admirable goals, but without buy-in from major labs like OpenAI or Runway, its impact remains limited.
Until that changes, social platforms will remain reactive. And the average user will be left playing a losing game trying to determine what's real and what's not.
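To make "integration" concrete, here is a minimal sketch of what a platform-side provenance check could look like at upload time. It assumes the open-source c2patool CLI from the Content Authenticity Initiative, which prints a file's C2PA Content Credentials as JSON; the labeling policy and the upload.mp4 filename are illustrative assumptions, not any platform's actual pipeline. SynthID has no comparable public detection API, so it isn't shown.

```python
# Minimal sketch of a platform-side provenance check at upload time.
# Assumes the open-source c2patool CLI (github.com/contentauth/c2patool)
# is installed; running `c2patool <file>` prints the file's C2PA
# manifest store as JSON and exits non-zero when no Content
# Credentials are present. The labeling policy below is illustrative.

import json
import subprocess


def read_content_credentials(path: str) -> dict | None:
    """Return a media file's C2PA manifest store, or None."""
    try:
        result = subprocess.run(
            ["c2patool", path], capture_output=True, text=True
        )
    except FileNotFoundError:
        return None  # c2patool not installed.
    if result.returncode != 0:
        return None  # No credentials attached, or unreadable file.
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None


def label_for_feed(path: str) -> str:
    """Decide what badge a feed could show next to an upload."""
    store = read_content_credentials(path)
    if store is None:
        # Today, most AI-generated video lands here: nothing
        # machine-readable for the platform to act on.
        return "No provenance data"
    # The active manifest records which tool produced or last edited
    # the asset; a platform could surface that chain to viewers.
    active = store.get("active_manifest", "")
    generator = (
        store.get("manifests", {})
        .get(active, {})
        .get("claim_generator", "unknown tool")
    )
    return f"Content Credentials: made with {generator}"


if __name__ == "__main__":
    print(label_for_feed("upload.mp4"))  # hypothetical uploaded file
```

Even a check this simple only works if generators attach credentials in the first place, which is exactly the buy-in gap Roetzer describes.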
This isn’t just about transparency. As Roetzer pointed out, these platforms are where billions of people get their news, form opinions, and engage with the world.
If we can’t clearly and consistently mark what’s real and what’s AI-generated, we risk undermining the foundation of shared reality.
The solutions aren’t simple. But the urgency is.
As Roetzer wrote on X:
“It seems irresponsible at this point to not publicly tag them on social media.”