Sora Died and the Copyright Crowd Immediately Said They Told You So
OpenAI's shutdown of Sora and the collapse of its Disney partnership have handed AI skeptics their clearest evidence yet that copyright litigation isn't merely slowing generative AI — it's killing specific products outright.
Ewan Morrison put it bluntly on X: "Generative AI is cooked. Must have been a huge copyright theft lawsuit that shut the slop machine SORA down." The post got 138 likes and 19 retweets — not viral by platform standards, but loud enough to measure the mood. Morrison isn't a copyright lawyer or an AI researcher. He's a novelist, and he's been making this argument for years. What changed this week is that Sora gave him something concrete to point at.
The collapse wasn't one thing — it rarely is. A Bluesky professor noted, with mordant timing, that she'd used Disney's OpenAI partnership as a classroom example of how companies were trying to manage "the nuclear bomb generative AI has set off over intellectual property rights" — and then the partnership dissolved before she'd finished grading. On X, @ParkObsession laid out the pile-up pragmatically: legal disputes, copyright infringements, compute costs burning cash, a drop in downloads, and mounting competition from rivals who aren't carrying the same litigation weight. "My bet is Disney will probably seek out another AI company," they wrote. That's not triumphalism — it's a business read. The copyright pressure didn't kill generative AI video; it killed this particular attempt at it, by this particular company, at this particular price point.
What's shifting the AI & Law conversation this week isn't the volume of outrage — it's the precision. A Bluesky post linked to sloptracker.org, a site tallying money drained from real musicians by AI-generated filler content flooding Spotify. The number cited: over $2.5 million lost, attributable to just 50 AI "artists." That figure surfaced the same week Sora went dark, and the people making the copyright argument are treating the timing as confirmation rather than coincidence. The fair use debate has always been abstract — training data, transformative use, derivative works. A dollar figure attached to named musicians is something else.
The news coverage running alongside all this is almost comically disconnected from it — roundups of top legal AI tools for practitioners in Nigeria and the Netherlands, profiles of Harvey AI's M&A workflow platform, optimistic op-eds about junior lawyers being "redefined" rather than replaced. That parallel conversation isn't wrong, exactly. Law firms are adopting AI tools at a real clip, and the legal tech market is genuinely consolidating. But it's operating in a different register from the copyright fight, which is the one that's actually reshaping what gets built and what gets abandoned. Sora didn't die because law firms are cautious about their AI procurement decisions. It died because the legal exposure of building a consumer-facing video product on top of unlicensed training data became too expensive to absorb — and the people who said that would happen first are not letting anyone forget it.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.