Sora Died and the Copyright Crowd Immediately Said They Told You So
OpenAI's shutdown of Sora and the collapse of its Disney partnership have handed AI skeptics their clearest evidence yet that copyright liability isn't a manageable legal risk — it's an existential one.
Ewan Morrison didn't wait long. Within hours of news breaking that Sora had been shuttered, the Scottish author posted what read less like commentary and more like a verdict: "Generative AI is cooked." The post got 138 likes — not viral by most measures, but enough to become the emotional anchor for a conversation that had been building for months. His framing was specific: it was copyright lawsuits that shut the slop machine down, and the whole enterprise was fundamentally unsustainable. That's an argument, not a vibe — and it's one that's gaining traction in places where arguments about AI and law actually get made.
The Sora story broke alongside the Disney partnership collapse, and the combination gave critics something they rarely get in intellectual property fights: a clear, concrete consequence. One pragmatically negative account on X laid out the arithmetic — legal disputes, compute losses, Elon Musk's lawsuit, plummeting downloads — and it was readable. Morrison's dismissal and the cooler-headed ParkObsession post reached the same conclusion from different angles: Disney would find another AI company, but the accumulation of copyright exposure plus cash burn made the original deal untenable. That is, in miniature, the core claim critics of generative AI have been making about the entire industry. Sora's economics had hinted at this all along: the path from impressive demo to sustainable product was never clear.
A Bluesky educator captured the speed of the reversal with uncomfortable precision: they'd used the Disney-OpenAI partnership as a classroom example that very morning — specifically as a case study in how companies try to manage what they called the "nuclear bomb" generative AI had detonated over intellectual property rights. Then the partnership ended before the class was over. The satirical edge in their post didn't undercut the point; it sharpened it. Meanwhile, a separate Bluesky post pointed to sloptracker.org, which claims to document over $2.5 million in royalties siphoned from working musicians by AI-generated tracks on Spotify — spread across just fifty AI "artists." The post framed this as a fair use argument: if AI training on copyrighted work produces outputs that directly compete with and dilute compensation for the original creators, the fair use defense doesn't hold. That framing is increasingly where the creative-industries copyright fight has landed. With Patreon's CEO and others starting to build coordinated legal infrastructure around exactly this logic, the royalty-dilution argument is moving from Twitter grievance to legal theory.
What's shifted this week isn't the underlying law — fair use doctrine hasn't changed, and no court has issued a definitive ruling on AI training data. What's shifted is the confidence of the people making the case against it. Morrison's "generative AI is cooked" is a declaration, not a worry. The BTS fan accounts begging YouTube to enforce copyright against AI deepfakes of their idols aren't asking for a policy debate; they're exhausted and demanding action. The mood among critics has moved from anxious to impatient. Whether the courts eventually agree is beside the point for the companies feeling the financial pressure right now — Disney walked away, and that decision didn't require a judge.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.