
Anthropic Wins a Major AI Copyright Battle



Anthropic just scored a major win in court. And the implications could reshape the future of AI.

On June 24, 2025, U.S. District Judge William Alsup ruled that Anthropic's use of copyrighted books to train its large language model, Claude, qualified as "fair use." That marks the first time a federal judge has explicitly endorsed the fair use defense for generative AI training.

The ruling could provide a temporary shield for AI developers under fire for copyright infringement. But it’s not a free pass. The court also found Anthropic liable for downloading more than 7 million pirated books from shadow libraries. That part is headed to trial in December.

To make sense of this complicated but consequential moment, I spoke to Marketing AI Institute founder and CEO Paul Roetzer on Episode 157 of The Artificial Intelligence Show.

Fair Use: The Four-Part Framework

Judge Alsup’s decision hinges on a nuanced interpretation of fair use, a legal doctrine that allows unlicensed use of copyrighted works in specific situations. These situations include criticism, commentary, news reporting, teaching, and research. But how does that apply to AI?

According to the US Copyright Office, courts examine four factors to determine fair use:

  • Purpose and character of the use. Is it commercial or educational? Anthropic's use is clearly commercial, which typically weighs against fair use. But Judge Alsup emphasized the transformative nature of Claude’s training, a critical element we'll return to shortly.
  • Nature of the copyrighted work. How creative is the original content? Books, which are creative expressions, typically get strong copyright protection. Still, courts have historically granted more leeway if the new use significantly transforms the original.
  • Amount and substantiality of the portion used. How much was copied? And was it the "heart" of the work? Large language models often train on entire books, a sticking point in other lawsuits. But Alsup viewed the act of learning from the books—not replicating them—as key.
  • Effect on the market. Does the AI reduce demand for the original work? Alsup didn’t see evidence that Claude’s outputs cannibalized book sales. Instead, he likened AI training to a writer learning from other writers, not copying them.

Alsup called Claude’s training process "quintessentially transformative," a term with heavy legal weight. In copyright law, a use is transformative if it adds new meaning or purpose, rather than just repackaging the original.

"Essentially, the more a new work transforms the original, the more likely it is to be considered fair use," says Roetzer.

That framing may now influence other courts evaluating AI copyright cases. Anthropic, OpenAI, Meta, and others have long argued that training data isn't reused verbatim. Instead, it's analyzed, abstracted, and recombined to create novel outputs. Alsup's ruling gives some legal backing to that claim.

But Piracy Still Crosses the Line

While the ruling supports the concept of transformative fair use, it draws a hard line on sourcing. Anthropic admitted to downloading millions of pirated books, and the judge said unequivocally: That part is not fair use.

Alsup said that Anthropic had no entitlement to use pirated copies for training and rejected the company’s claim that the source of the data didn’t matter.

That distinction could have wide-reaching implications.

As Roetzer noted, courts could impose up to $150,000 in damages per infringement. If applied to 7 million pirated books, that math becomes an existential threat to a company like Anthropic.

What We Still Don’t Know

Crucially, the court only ruled on the legality of inputs (training data), not outputs (AI-generated content). Whether Claude or other AI systems can legally produce text that resembles copyrighted works remains unresolved.

The ruling also leaves the door open for future debate over whether any pirated content can ever be justified for training, even if used transformatively. For now, Alsup’s message is clear: legally acquire your data, or face the consequences.

Could Google Have an Edge?

One twist Roetzer raised:

In light of this ruling, Google’s long-running Google Books project may become a strategic asset. Since the early 2000s, Google has scanned tens of millions of books in partnership with publishers and libraries. Courts ruled in 2015 that the project didn’t infringe copyright because it only provided snippets online.

Now, if other courts follow this ruling's lead and affirm the legality of training on legally acquired books, Google could be sitting on the largest, cleanest library of training data in the world.

And you can bet that data will be used.

"The value of books is, when you go train, rather than scraping the internet and all the crap that comes with it, books are high quality," says Roetzer.

"They are unmatched in terms of expertise in different fields and diversity of knowledge. So books will likely get heavier weighting when going into training sets because they generally are higher quality than what you're going to find just randomly across the internet."

The Bottom Line

Alsup’s decision isn’t binding nationwide, and it’ll almost certainly be appealed. But it introduces a critical precedent in the generative AI legal landscape: training on copyrighted material can be fair use, if done responsibly.

That sets the stage for future litigation, especially around pirated content, AI outputs, and the real market impact on creative industries.
