Anthropic Settled. Everyone Else Is Still Suing.
A wave of copyright lawsuits from BMG, the New York Times, Warner Bros., Disney, and others is reshaping how the AI industry thinks about training data — and Anthropic's quiet settlement may be the most revealing development of all.
Anthropic settled its copyright case with a group of authors this week, and the announcement landed with a curious flatness. No dollar figure disclosed, no admission of liability, no landmark statement about what AI companies can or cannot do with copyrighted text. Just a quiet resolution buried inside a news cycle already crowded with new lawsuits — BMG suing Anthropic over Bruno Mars and Rolling Stones lyrics, Warner Bros. Discovery suing Midjourney, the Chicago Tribune and the New York Times both pursuing Perplexity, Disney and Warner jointly urging a judge not to dismiss their own case. Anthropic made the litigation go away, and then the litigation kept coming.
What the settlement actually signals isn't that the legal questions are getting resolved — it's that the companies paying to train large models have run the math and decided that case-by-case settlement is cheaper than precedent. A ruling that clearly defines what counts as infringement in AI training would constrain every model being built. A settlement constrains nothing. Anthropic pays out, the authors sign NDAs, and Anthropic ships Claude anyway. The Columbia Journalism Review has started tracking the full picture — lawsuits versus licensing deals — and the tracker itself tells you something: there are now enough of these cases that someone felt compelled to build an inventory tool.
Perplexity is the most instructive example of how this plays out in practice. At an $18 billion valuation, the company has made genuine overtures to news publishers — revenue-sharing arrangements, attribution features, the whole reconciliation package. Publishers keep suing it anyway. The Fortune framing of this — Perplexity wants to play nice but keeps getting dragged into court — obscures what's actually happening. Publishers aren't suing because they reject the business terms. They're suing because they want courts to establish that the underlying act of summarizing their content without a license was always illegal, regardless of what Perplexity offers going forward. The lawsuit is a negotiating position. So is the settlement offer. Neither side actually wants a judge to decide.
A court ruling this week that AI news summaries may infringe copyright is the closest the industry has come to a real answer — and even that ruling is narrow enough that every company's lawyers will spend the next year arguing it doesn't apply to their client's specific product. What's emerging isn't legal clarity but a permanent state of managed uncertainty, where the biggest players can afford the litigation and the smallest ones can't. The authors who sued Anthropic got a settlement. The next group of authors suing the next AI company will get a settlement too. What they won't get is a rule that sticks.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
More Stories
A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat
A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.
Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise
A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.
The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care
A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.
Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet
A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.
Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists
A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.