Cognitive Exhaustion, Not Outrage, Is Reshaping How People Use Social Media
The AI-on-social-media conversation has moved past moral panic into something quieter and harder to reverse — a widespread recalculation of whether these platforms are still worth the effort.
When a Bluesky user explains that they'd carefully sculpted their Twitter algorithm to show only fanart — only fanart — and still found the experience "soul-sucking," the problem they're describing isn't misinformation. It's something closer to erosion: the sense that whatever made a feed feel alive has been quietly replaced with something that only resembles it. That observation, and dozens like it, has become the dominant emotional register of this beat over the past week. Not fury. Not panic. A kind of weary cost-benefit math that people are running on platforms they used to open without thinking.
The institutional framing of AI-on-social-media has always centered on moderation — deepfakes, election interference, the epistemic threat of synthetic content at scale. What's actually being discussed is smaller and more personal. People aren't worried about what AI content might do to democracy in the abstract; they're describing what it's already done to their Tuesday afternoons. One person caught themselves believing an AI-fabricated story not because it was politically engineered, but because it was emotionally designed to land — and it did. The harm being named here is attentional and affective. It costs something to scroll now. That cost is being calculated, and for a growing number of people, the ledger isn't balancing.
Running underneath the fatigue posts is a harder-edged argument about who benefits from the current arrangement. A post pointedly contrasting how different platform owners' political alignments would shape their responses to AI-generated harmful content drew more engagement than almost anything else in the conversation — not because it was particularly novel, but because it gave structural language to something people had been feeling individually. A separate thread made the case that ID verification pushes on social platforms have nothing to do with child safety and everything to do with guaranteeing human eyeballs to advertisers in an environment where AI has made that guarantee nearly impossible to offer. These aren't conspiracy posts. They're attempts to explain why the fatigue feels systemic rather than fixable.
A finding from Dutch municipal elections — that roughly nine in ten AI-generated campaign social posts carried no disclosure of their origin — circulated without generating much reaction. A year ago that number would have produced a minor firestorm. Now it reads as confirmation of something people have already absorbed. The normalization of undisclosed AI content in political contexts is moving faster than collective outrage can metabolize it, and the Bluesky crowd has largely filed it under "things we assumed were true."
The split this is producing is real and widening: a vocal minority actively redesigning their online lives — aggressive curation, platform exits, smaller networks — and a much larger group still scrolling the ambient slop, occasionally fooled by it, occasionally irritated by it, but not yet moved to change anything. The first group is winning the argument on Bluesky. Their framing — that the problem isn't any specific AI product but what AI content at scale has done to the social contract of shared online space — is the one that will shape platform policy conversations over the next year. Not because regulators are listening to Bluesky, but because the behavior change is already happening and platforms will eventually have to respond to it.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
More Stories
A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat
A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.
Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise
A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.
The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care
A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.
Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet
A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.
Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists
A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.