OpenAI just unleashed Sora 2, its most advanced video generation model yet, and dropped it into a new social app that looks and feels a lot like TikTok.
The technology is stunning. The model captures physics with hyperrealistic fidelity, making it feel less like a special effects tool and more like a true world simulator. On the feed, every single clip is AI-generated, and a new feature called “cameos” lets you drop your likeness (and your friends' likenesses) into any scene with just a short recording.
But in the rush to launch what some are calling the “ChatGPT moment for video,” OpenAI also kicked open a Pandora’s box of copyright infringement, deepfake concerns, and questions about the future of online content.
To make sense of this disruptive launch and what it means, I spoke to SmarterX and Marketing AI Institute founder/CEO Paul Roetzer on Episode 172 of The Artificial Intelligence Show.
From the moment you open the new Sora app, the interface is immediately familiar.
“Wait, did I open Instagram Reels?” Roetzer says he asked himself after getting access. “It looks exactly like Reels and TikTok. It's the same format, same scrolling mechanism. It's just all AI-generated.”
Thanks to Sora 2’s capabilities, these AI-generated videos are stunningly realistic and feature synchronized audio. According to OpenAI’s Sora 2 system card, the new model builds on the original Sora with capabilities like “more accurate physics, sharper realism, synchronized audio, enhanced steerability, and an expanded stylistic range.”
The standout feature seems to be those “cameos,” which allow users to record a short clip of themselves and then, with permission, use that likeness in AI-generated videos. According to OpenAI, the person whose likeness is used can revoke access at any time.
But it was the app’s public feed that immediately raised alarms.
Upon opening the app, Roetzer was greeted by a wall of intellectual property violations.
“It was just, here's your AI slop feed with all these Nintendo characters and Pokémon and South Park and SpongeBob SquarePants, Star Wars, everything,” he says.
Naturally, he decided to test the generation capabilities himself. He prompted the model to create a scene with Batman at a baseball game, with the Joker pitching. The result? An immediate rejection notice saying the content may violate OpenAI’s guardrails due to its similarity to third-party content.
He tried again with Harry Potter. Same result.
This was just 48 hours after the app’s launch, and while OpenAI had seemingly implemented guardrails to block new creations, the feed was still flooded with copyrighted characters. It was a clear sign that OpenAI had launched first and was trying to clean up the mess in real time.
“It is blatantly obvious that this thing is trained on an immense amount of copyrighted content, including shows, movies, and video games,” Roetzer says.
The public reaction was swift, with many critics labeling the app an “AI slop feed” that raises serious copyright concerns. Just a few days into the launch, with backlash mounting, OpenAI CEO Sam Altman published a blog post titled “Sora update #1.”
In the post, Altman acknowledged the feedback and announced two upcoming changes:

- Giving rightsholders more granular, opt-in control over how their characters can be generated.
- Sharing some of the revenue from video generation with rightsholders whose characters appear in user-generated videos.

On the decision to launch first and adjust later, Altman wrote:
“We have been learning quickly from how people are using Sora and taking feedback from users, rightsholders, and other interested groups. We of course spent a lot of time discussing this before launch, but now that we have a product out we can do more than just theorize.”
Roetzer finds that framing of the “feedback” from critics unconvincing.

“You don't train a model on all this copyrighted stuff, allow people to output it, and not know that you're going to get massive blowback,” he says.
The legal risks aren’t just for OpenAI, either. According to IP attorney Christa Laser, whom Roetzer consulted, individual users are also exposed. Are users at legal risk for generating copyrighted content? The short answer is yes, unless OpenAI has licensing deals with rights holders like Disney that it sublicenses to users.
If the legal and ethical minefield was so obvious, why did OpenAI charge straight into it? Roetzer believes it boils down to one thing: competition.
“The real reason they did this is for competition. Google got one up on them with Veo 3,” he says. “They had to just get out ahead of it and get it out there.”
OpenAI claims this is part of its “iterative deployment” strategy, or releasing tech into the world to see how people use it. But as Roetzer notes, nothing that happened in the first week was unpredictable. The company wanted a viral hit, got it to number one in the App Store, and is now dealing with the fallout.
The Sora 2 launch is a perfect microcosm of the current AI landscape: incredibly powerful technology is being deployed at breakneck speed, with safety, ethics, and legal frameworks struggling to keep up.
For creators, the implications are troubling. YouTuber MrBeast publicly wondered what will happen to the millions of creators who make content for a living once AI videos are as good as traditional ones, calling it “scary times.”
Meanwhile, some in the tech world have been dismissive of concerns about Sora 2. Venture capitalist Vinod Khosla derided detractors as “ivory tower Luddite, snooty critics or defensive creatives.” Roetzer warned that this tone is dangerously divisive and alienates the very people whose work fuels these models.
For all the talk of AI-generated “slop,” OpenAI’s ambitions are much grander. As the company stated in its announcement, this is a step toward “general-purpose world simulators and robotic agents” that will “fundamentally reshape society.”
This may just be the beginning, but one thing is clear: the guardrails for AI-generated content are being built while the car is already speeding down the highway.