AI Is Being Put on Trial — and Social Media Handed the Prosecution Its Evidence
The conversation has moved from how recommendation algorithms work to who pays when they don't. Product liability is no longer a metaphor.
A legal question has been circulating on Bluesky for the past several days: can a social media algorithm be a defective product? The framing sounds like a thought experiment, but it's traveling with the velocity of something people recognize as real. What makes it stick isn't novelty — it's timing. A run of new research from Northeastern and the Global Network on Extremism and Technology, amplified by coverage in outlets from El País to Time, has in quick succession done something the previous decade of algorithmic criticism mostly failed to do: it showed the mechanism. Not "feeds might shape beliefs" but here is how recommendation systems sort and amplify political preference, with data. Consumer protection law has a name for a product that harms through its designed function. Researchers just finished building the case.
That evidentiary shift is what Zuckerberg walked into when he told the world "AI is the new social media." On Bluesky, the response wasn't exactly outrage — it was the sharper thing, dismissal with receipts. "The man has a record of failed predictions," one post noted, and the replies added the citations. The platform that Meta is positioning as an AI-native social experience has become the place where that positioning gets the most forensic scrutiny. The irony sharpens further when you notice that Threads is quietly testing a feature letting users manually adjust their feed algorithms — a small concession, rolled out without fanfare, that lands as an admission that the current system isn't working for everyone who uses it.
While that institutional maneuvering plays out, artists are describing something that doesn't show up in platform dashboards yet. The fear of feeding training data has become, for a visible subset of creators on Bluesky, a reason not to post at all. "A big reason for my art block as of late has definitely been the fear of feeding AI," one user wrote — framing a return to posting as deliberate resistance rather than ordinary participation. What's significant isn't the individual sentiment but the structural implication: the platforms have no mechanism to acknowledge this problem, because acknowledging it would mean conceding that their AI integrations are driving away exactly the human-generated content that made them worth using. The withdrawal is quiet and cumulative, the kind that becomes visible in the data only after it's already irreversible.
These threads — the liability argument, the Zuckerberg skepticism, the artists' slow exit — are not separate conversations. They share an architecture. Each one is a community deciding that the social contract with platforms has changed, and that the old vocabulary of "engagement" and "growth" no longer describes what's actually happening. Product liability works as a frame precisely because it imports the assumption that harm requires a responsible party. The research gives that assumption quantitative legs. And the artists withdrawing from creative communities online are, in their own way, making the same argument: something was built that damages the thing it was supposed to support, and someone should be accountable for that.
The product liability framing will find its way into courtrooms. Given the volume of state-level social media legislation currently in motion, a research paper showing algorithmic mechanism will get cited in a complaint within the year — probably sooner. When it does, Zuckerberg's prediction that AI is the new social media will age as a statement made by someone who didn't notice the prosecution was already in discovery.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.