Britain Tells Campaigns to Stop Using AI Deepfakes. The Internet Notes This Was Always the Problem.
The UK Electoral Commission just published its first guide treating AI-generated disinformation as a campaigning offense. On Bluesky, the response splits between people who think this is overdue and people who think it misdiagnoses the disease.
The UK Electoral Commission's new guide on respectful campaigning — published this week ahead of local and devolved elections — contains a sentence that got enough traction on Bluesky to pull this beat into focus: "Deliberate misinformation and disinformation are unacceptable. This includes the use of technology to misrepresent the views or actions of others through AI generated disinformation." The post sharing it was pragmatic in tone, neither alarmed nor triumphant. Just: here is a thing that now exists. The likes were modest. The replies were not.
What the guide actually represents is a regulatory institution catching up to a threat that communities have been managing without institutional help for some time. Fan networks coordinating mass reports against AI deepfakes targeting public figures, platform policies that label real photographs as AI-generated while letting actual synthetic media slip through — these gaps predate any electoral guidance document by years. A photographer whose genuine work gets flagged by Instagram as "Made with AI" while a fabricated campaign video circulates unflagged is experiencing the asymmetry the Electoral Commission is only now beginning to name. The guidance is real, but it arrives into an enforcement environment that the institutions writing it don't yet control.
The sharpest counterpoint on Bluesky this week didn't come from a policy account. Someone pointed out — with visible amusement — that they'd just seen a post claiming the internet was "fun before AI, when we didn't have to question what we saw," and pushed back: we had a misinformation crisis long before generative AI was a household term. Viral fabrications, coordinated inauthentic behavior, state-sponsored influence operations — Russia has been threading through this beat for years, and the tools involved weren't always neural networks. The post got a small number of likes but landed in a conversation that was already running anxious, and the anxiety, notably, was not about AI specifically. It was about epistemic collapse as a condition, with AI as the latest accelerant.
That's the real argument underneath the Electoral Commission guidance: whether we're dealing with a new problem that requires new rules, or an old problem that's been handed a more efficient engine. The guide treats it as the former. The Bluesky conversation increasingly treats it as the latter — which means the regulations being written right now are optimized for the thing people have already started to outmaneuver. By the time enforcement catches up to AI-generated electoral disinformation, the fight will have moved somewhere the guide doesn't cover.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.