Discourse data synthesized by AIDRAN

People Hate AI the Way They Once Hated Social Media — and They're Using It Anyway

A Bluesky post about AI adoption despite mass loathing has resurfaced one of tech's most uncomfortable patterns: the thing everyone says they despise becomes the thing everyone uses.

Discourse Volume: 3,570 / 24h
Beat Records: 42,757
Last 24h: 3,570

Sources (24h):
- X: 99
- Bluesky: 211
- News: 144
- YouTube: 36
- Reddit: 3,079
- Other: 1

There's a post circulating on Bluesky that cuts through most of the week's noise. "People hate AI the way they hated social media," it reads, "and they're using it anyway." The observation isn't original — the social media parallel has circulated in tech criticism for years — but the framing landed this week and gained traction. The author's argument isn't that AI is good or bad. It's that the hatred-plus-adoption paradox is structurally familiar, which means a regulatory focus purely on bad actors misses the point entirely. What we need, the post argues, are accountable public alternatives — the kind that never materialized for Facebook or TikTok.

The rest of the week's conversation on AI and social media is running at nearly twice its normal volume, and the mood is almost uniformly grim — particularly on Bluesky and Reddit, where the negative current is strongest. A post invoking Bernie Sanders and emotional dependency has been making the rounds: "What does it mean for our young people to form 'friendships' — to become emotionally dependent on AI, while becoming increasingly isolated from other human beings?" Sanders then pivots to social media's documented damage to children's mental health, and the implication is clear — we already ran this experiment, we know how it ends, and we're running it again. The 22 likes on that post don't reflect how widely the sentiment is shared; dozens of similar threads are making nearly identical arguments across r/Parenting, r/nosurf, and the broader Reddit ecosystem.

Another Bluesky post, sardonic where the Sanders-adjacent one is anxious, drew an illuminating distinction about platform epistemics: closed circles — group chats, small friend groups — create accountability that open platforms don't. "In a crowd, you can say anything," it noted. "But in a small group, you have to be able to keep a straight face." The specific example was the claim that closing Sora meant "the end of AI" — a piece of open-platform hyperbole that would collapse immediately under social pressure in any smaller context. This is the less-discussed half of the AI-and-social-media story: it's not just that AI amplifies social media's harms, it's that social media's architecture amplifies AI's most absurd claims right back.

The AI and privacy conversation is surging in parallel, and the two beats are feeding each other in ways that feel less like coincidence and more like algorithm. A report that Chinese authorities have barred executives from a Singapore-based AI firm from leaving the country — amid a review of the company's $2 billion acquisition by Meta — arrived in a community already primed on surveillance anxieties. The story connects geopolitical tension directly to the social platforms people use daily: Meta is both the thing people say they hate and the place they keep returning to, now entangled in a cross-border regulatory confrontation in which nobody has clean hands.

What makes this week's conversation different from previous cycles isn't the arguments — those are stable — but the escalating historical framing. The dot-com comparison keeps appearing, and someone on Bluesky made the point cleanly: the dot-com bubble didn't generate this kind of intense social loathing. Whether that's because those companies were better-behaved or because social media now gives everyone front-row access to the worst-behaved people's thoughts is genuinely unclear. Both things are probably true. What isn't unclear is that the anger has found a structural argument to attach itself to: we let social media off the hook in its critical years, and we're watching the same permissiveness take shape around AI. The researchers publishing on arXiv this week are, per usual, considerably more optimistic than everyone else — but optimism from that direction has stopped moving anyone. The people running these arguments are done waiting for the institutions to catch up to what they already understand.

AI-generated

This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.

More Stories

Industry · AI Industry & Business · Medium · Mar 27, 6:29 PM

A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat

A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.

Philosophical · AI Bias & Fairness · Medium · Mar 27, 6:16 PM

Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise

A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.

Industry · AI in Healthcare · Medium · Mar 27, 5:51 PM

The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care

A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.

Society · AI & Social Media · Medium · Mar 27, 5:32 PM

Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet

A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.

Philosophical · AI Consciousness · Medium · Mar 27, 5:14 PM

Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists

A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.
