In a dramatic move, President Donald Trump has fired Shira Perlmutter, the head of the US Copyright Office, just days after her office released a 113-page report that questions whether AI training on copyrighted material qualifies as fair use.
And, make no mistake, the stakes are very real. Because at the heart of this firing is a growing battle between the rights of creators and the ambitions of the world’s most powerful tech companies.
To break it all down, I turned to Marketing AI Institute founder and CEO Paul Roetzer on Episode 148 of The Artificial Intelligence Show. And he made one thing clear:
This report isn’t just a legal memo. It’s a potentially game-changing document. And the tech world knows it.
Let’s start with what the Copyright Office's report actually said.
The report argues that training AI models on copyrighted content may not qualify as fair use, the legal doctrine that allows limited use of copyrighted material without the owner's permission.
That is a direct rebuke to the arguments companies like OpenAI, Meta, and Google have been making for years for why it's OK to use copyrighted material to train their models.
According to Roetzer, the firing of Perlmutter less than 24 hours after the report’s release isn’t a coincidence. While the report isn’t legally binding, it can absolutely be cited in court. And it gives massive credibility to the argument that AI companies violated copyright law by scraping the internet to train their models.
“We view this argument as mistaken,” the report says of the idea that AI training is inherently transformative and therefore protected under fair use.
In short, the report strikes right at the heart of the tech industry's position on AI copyright.
"The US Copyright Office just laid out the argument that could go against all of that," says Roetzer. "That's why they got fired."
It's important to understand the context here, says Roetzer: AI companies have known from the start that their use of copyrighted material for model training was legally problematic.
“All the AI labs building these models absolutely used copyrighted materials,” he says.
“They knew it was a legal gray area, and most likely illegal at the time based on current US law.”
But the calculus was simple: the risk of lawsuits was worth it if it meant dominating a trillion-dollar industry. Even if they ended up losing in court, the thinking was: who cares? By then, the game would be over. The winners would already be crowned.
This is part of why ChatGPT’s 2022 launch was such a watershed moment. It forced other AI labs to accelerate their own model releases, knowing full well that they’d be relying on the same legally risky data practices.
Ever since, they've leaned on the argument that their use of copyrighted material falls under fair use to cover their tracks.
The tech industry’s reaction? Alarm.
And the White House’s response? Swift.
Firing Perlmutter—and also removing the Librarian of Congress who appointed her—signals that this wasn’t just about policy. It was a raw political play to change the narrative before it ends up in the courts.
But if that was the plan, it may have backfired.
According to The Verge, the officials now installed in these roles may be even more skeptical of Big Tech’s copyright arguments. They’re not tech accelerationists—they’re anti-monopoly hardliners.
Either way, the firing reveals just how high the stakes are. Unless it's withdrawn or revised, the report is likely to be quoted in legal briefs now being filed; Roetzer suspects it has already been submitted as supporting evidence in multiple lawsuits. And with high-profile cases involving OpenAI, Meta, and others already underway, the Copyright Office's stance could influence how judges think about everything from model training to output liability.
The most surreal part? There’s still no clear answer to the basic question: Is it legal to train AI on copyrighted content without permission?
And that uncertainty is still frustrating a lot of people.