Governments Are Using AI to Spread Misinformation, and Nobody Has a Plan to Stop Them
The conversation around AI and misinformation has moved past troll farms and election deepfakes to a darker question: what happens when the disinformation comes from the state itself? The answers, so far, are not reassuring.
A user on Bluesky put it plainly this week: governments are using AI to spread misinformation, and people retreat into their silos and accept it at face value. The post didn't go viral (it drew exactly one like), but it named something the rest of the conversation has been circling without quite saying. The dominant anxiety in this beat is no longer about rogue actors or cheap troll farms, though that fear hasn't vanished. It's about institutional disinformation. The usual countermeasures (media literacy campaigns, platform moderation, detection tools) were never designed for a problem where the entity doing the lying is the government.
The deepfake thread has split into two distinct arguments that rarely talk to each other. One is about elections and geopolitics: fabricated footage of strikes on the USS Abraham Lincoln circulating on X, synthetic media flooding Indian election campaigns, The Guardian running pieces about AI eroding democracy in 2024. The Euractiv piece asking whether the EU's AI Act goes far enough on deepfakes is the polite version of what everyone in this space is actually saying: that the law arrived too late, drawn too narrowly, with no enforcement infrastructure behind it. Meta's Oversight Board, per Brookings, is equally unprepared. The second argument is about something quieter and more intimate: deepfake abuse targeting women, identity theft via academic credentials, blackmail through messaging platforms. Sony pulled 135,000 AI-generated tracks from streaming services this week, and the music industry's own framing, that labeling AI material is "the next critical challenge," inadvertently revealed how far behind the curve the industry remains. Labeling is a 2022 conversation. The scale is 2025.
One Bluesky post made an observation that deserves more attention than it got. The writer started by calling misinformation the killer app for AI — cheap propaganda distribution, no troll farm required — but then revised mid-thought. The real killer app, they concluded, is the degradation of expertise. That reframe matters. Cheap lies are a logistics problem; you can build detection tools, you can moderate at scale. The erosion of the idea that expertise exists and is worth trusting is a different kind of damage, one that compounds everything else. A deepfake of a politician saying something false is bad. A world where people assume every video of every politician is potentially fake, and therefore stop updating their beliefs on evidence at all, is worse. The second problem doesn't get fixed by better detection software.
The detection-tool ecosystem is growing anyway, and YouTube this week surfaced a cluster of short-form demos — Elon Musk deepfakes, Virat Kohli deepfakes — advertising a product called Chetana that claims instant identification. The pragmatic optimism in those clips sits in strange contrast to the rest of the conversation, where the mood has curdled well past concern into something closer to exhausted fatalism. A qualitative study circulating on Bluesky, drawing on interviews with news users in Mexico, the US, and the UK, frames this as "epistemic vigilance" — people know the problem exists, they're trying to navigate it, but the cognitive load is enormous and the tools available to ordinary people are thin. The researchers don't use the word "exhaustion," but the posts sharing their work do.
What's actually shifting in this conversation is the locus of responsibility. Six months ago, the dominant frame was platform accountability — what should Meta do, what should X do. That argument hasn't disappeared, but it's being crowded out by a grimmer recognition: platforms are not going to solve this, governments are sometimes the problem, and detection tools are playing permanent catch-up with generation tools. The women facing deepfake abuse with no legal recourse aren't waiting for a platform policy update. The African democracies named in this week's news coverage aren't waiting for the AI Act's enforcement mechanisms to mature. The conversation is moving, slowly but unmistakably, toward a question that nobody in an institutional position wants to answer honestly — not about what we could do, but about what we're actually willing to do when the cost of action falls on powerful actors and the cost of inaction falls on everyone else.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.