Sony Pulled 135,000 Deepfake Songs. The Harder Problem Is Everything That Stayed Up
The music industry's mass deepfake removal looks decisive until you realize nobody agreed on what should have been flagged in the first place — and the same labeling vacuum is swallowing elections, gender-based violence cases, and foreign influence operations simultaneously.
Sony's removal of 135,000 deepfake songs from streaming platforms this month was treated as a win, and in a narrow sense it was. But the executive quoted in coverage immediately pivoted to the real problem: nobody has agreed on how to label AI-generated material before it spreads, which means the next 135,000 are already circulating. That framing — act after the fact, then acknowledge the system isn't built for prevention — describes the entire state of AI misinformation governance right now.
The election coverage in this beat has a quality of temporal vertigo. Dozens of news articles from 2024 are circulating again, their anxious predictions about deepfake election interference now readable against the actual record. The Brookings piece is the most honest of them: we were, it admits, largely working in the dark. What researchers can say is that deepfakes appeared in elections across 38 countries, that AI-generated memes circulated widely, and that Russia and Iran used synthetic media in influence operations the U.S. government publicly attributed. What nobody can say cleanly is how much any of it moved votes. The absence of that answer hasn't quieted the conversation; it has made it louder, because everyone can project their priors onto an empirical void.
The deepfake abuse conversation on Bluesky runs in a different emotional register from the election coverage. Where news articles tend toward policy analysis, Bluesky posts about AI-generated non-consensual imagery of women are written in the vocabulary of personal harm: women describing being unable to get legal protection, justice systems that move too slowly for content that spreads in minutes. A UN story making the rounds frames it as a gender-based violence issue, not a technology issue. That framing shift matters: it puts the failure on institutions rather than platforms, and it implies that better detection tools solve the wrong problem if courts still won't act.
The split between catastrophism and skepticism in news coverage is real and underexamined. Pieces from The Guardian calling AI misinformation "disinformation on steroids" ran alongside a Governing article arguing the electoral impacts were overblown. Both appeared in 2024; neither position has been settled by the evidence since. What the skeptical pieces get right is that the threat was sometimes described in ways that outran the documented harm. What the catastrophists get right is that the documented harm is likely undercounted, because deepfake attribution is genuinely hard and most incidents go unreported. The honest answer lives uncomfortably between these poles, which is exactly why neither side concedes.
The labeling argument that Sony's story surfaced is probably where this beat is actually heading. Regulation of deepfakes during elections has moved fitfully — state laws vary wildly, federal legislation stalled, and the First Amendment complications around political speech are real. But labeling is a narrower ask than prohibition, and it's gaining traction in both the music industry and election security circles as a compromise position. The problem is that labeling requires agreement on what counts as AI-generated, who verifies the label, and what happens when the label is absent or forged. Each of those questions opens a new argument. The conversation has shifted from "should we do something" to "who builds the infrastructure" — and that second question is harder, because it requires institutions that currently don't trust each other to cooperate on technical standards before the next election cycle closes the window.
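To make those three questions concrete, here is a minimal sketch of what platform-side label verification might look like. Everything in it is hypothetical: the issuer registry, the field names, and the HMAC scheme are illustrative stand-ins invented for this sketch, not the C2PA standard or any music-industry spec, and a real provenance system would use asymmetric signatures and certificate chains rather than shared secrets.

```python
# Illustrative sketch only: maps the three open questions in the labeling
# debate onto code. All names and the signing scheme are hypothetical.
import hashlib
import hmac
import json

# Question 2 ("who verifies the label?") becomes: whose keys do platforms
# trust? Shared secrets keep this sketch self-contained; a real system
# would use asymmetric signatures and a certificate chain.
TRUSTED_ISSUERS: dict[str, bytes] = {
    "example-label-authority": b"not-a-real-key",
}

def classify_label(asset_bytes: bytes, label: dict | None) -> str:
    """Return 'verified', 'forged', 'unknown-issuer', or 'absent'."""
    if not label:
        # Question 3: an unlabeled file is indistinguishable from a
        # human-made one unless labeling is universal and mandatory.
        return "absent"

    issuer_key = TRUSTED_ISSUERS.get(label.get("issuer", ""))
    if issuer_key is None:
        return "unknown-issuer"

    # Question 1 ("what counts as AI-generated?") is reduced here to a
    # single boolean claim, bound to a hash of the exact bytes shipped.
    payload = json.dumps(
        {
            "sha256": hashlib.sha256(asset_bytes).hexdigest(),
            "ai_generated": label.get("ai_generated"),
        },
        sort_keys=True,
    ).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()

    if hmac.compare_digest(expected, label.get("signature", "")):
        return "verified"
    # A signature mismatch means the asset or the claim changed after signing.
    return "forged"
```

The policy question then attaches to the return values: a platform can surface "verified" labels and quarantine "forged" ones, but what to do with "absent" is a decision no verification code can make, and it is exactly where the next 135,000 unlabeled tracks live.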
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.