As Suno's fair use defense winds through the courts, an argument from a recent symposium is circulating: the real problem with AI and creativity isn't copyright at all. Copyright may be the wrong framework entirely.
At a UC Berkeley symposium this week, copyright scholar Matthew Sag posed a question that reframes the entire AI music debate: if an AI model's outputs aren't substantially similar to the works it trained on, does copyright law even apply? And if it doesn't, what framework should?[¹] The question, shared to a small but attentive audience on Bluesky, landed with the quiet force of something that clarifies rather than inflames — a rare register in a conversation that has spent most of its energy on courtroom strategy.
The timing is hard to ignore. Suno's admission that it trained on copyrighted music, followed almost immediately by its hiring of Timbaland as a strategic advisor, had already exposed the gap between legal defense and cultural legitimacy. But Sag's framing cuts deeper than that contradiction. His argument is that the conversation between AI developers and the creative industries has been captured by copyright maximalists and AI minimalists, both of whom assume copyright is the relevant battleground. If the outputs are genuinely transformative enough to fail a substantial similarity test, then the harm being described (to musicians, composers, session players) is real but legally invisible under current doctrine.
That's the crux of what makes this moment uncomfortable for both sides. Artists who feel their livelihoods threatened by generative audio tools don't necessarily want a ruling that says copyright doesn't reach AI outputs, because a finding of no infringement would leave their real losses with no legal remedy at all.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
One engineer described stepping off social media, where people he agreed with about AI's dangers were also insisting the technology had no value at all, and finding the two worlds simply incompatible. That gap is the story.
A post in r/SoftwareEngineering argues that AI has made code generation nearly free, yet engineering teams are still stuck waiting weeks to ship. The conversation reveals a gap the industry hasn't fully named yet.
A writer arguing that tech's hollow ethics talk could create space for a real values debate landed in a feed already primed to fight about exactly that — and the timing is hard to dismiss.
Kevin Weil and Bill Peebles are out. Sora is folding. OpenAI's science team is being absorbed into Codex. The exits signal something more deliberate than a personnel shuffle.
When a forum famous for meme trades starts posting that a recession is bullish for stocks, something has shifted in how retail investors are using AI to reason about money — and the anxiety underneath is real.