Science Journalism Found Its AI Optimism. Working Researchers Didn't Get the Memo.
News coverage of AI in science reads like a highlight reel of breakthroughs. The researchers actually using these tools sound like they're watching a different experiment entirely.
A rocket engine designed by AI fires successfully on the first attempt. A visualization tool makes the human heart navigable for students. A molecular prediction toolkit ships with peer-reviewed credibility attached. Science journalism has found its AI story, and for several weeks running, it has been almost uniformly good news — breakthroughs framed as proof of concept, tools framed as acceleration, the occasional setback reframed as a lesson learned on the path to something better. If you read only the science beat, you'd conclude that AI and research have found each other at exactly the right moment.
The researchers on Bluesky are not reading only the science beat. The post that drew the most engagement in that community recently wasn't about a rocket engine — it was a Swedish-language critique of AI "degenerating all writing and all research work," a phrase that landed with the specificity of someone who has watched a colleague submit an AI-hallucinated citation and watched the reviewers miss it. Someone else noted the particular wretchedness of using Perplexity to research AI-generated slop — a sentence that required no explanation for its audience because the scenario was immediately recognizable. One post skipped calibration entirely and called for generative AI to be obliterated before any serious conversation could proceed. These aren't the hot takes of people who haven't engaged with the technology. They're the exhaustion of people who have engaged with it long enough to stop finding the institutional optimism cycle interesting.
What's actually being contested here isn't whether AI can design a functional rocket engine. It probably can. What's being contested is who gets to decide what counts as scientific progress, and what gets quietly swallowed when the answer is "whoever writes the press release." Science journalism, shaped by embargoed announcements and the structural incentive to make complexity legible, optimizes for the clean win — the first-attempt success, the tool with a good GitHub star count, the finding with a compelling visual. The research community that Bluesky skews toward is optimizing for something that doesn't fit in a headline.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.