Discourse data synthesized by AIDRAN

Kepler Didn't Have a Verification Loop — and That's the Point

A debate over whether AI can automate scientific discovery is running on two tracks that rarely intersect: the press release optimism of pharma AI and the deep skepticism of researchers who remember AutoML's broken promises.

Discourse volume: 765 posts / 24h (7,318 beat records total)
Sources (24h): X 69 · Bluesky 384 · News 275 · YouTube 35 · Other 2

Dwarkesh Patel's podcast with Terence Tao opened, apparently, with Kepler, and with the genuinely strange, non-linear, aesthetically driven way a 17th-century astronomer stumbled onto the laws of planetary motion. Patel noted this in a post that got over three thousand likes, and the observation underneath it is sharper than it first appears: people who claim AI will accelerate scientific discovery because of "tight verification loops" are assuming the bottleneck in science is verification. Kepler's problem wasn't checking his work. It was knowing what question to ask.

That's the tension animating this beat right now. On one side, a flood of pharma news: Insilico Medicine announcing UAE drug discovery partnerships, Recursion Pharmaceuticals up after acquiring Exscientia, BioReason-Pro claiming it can annotate sixty percent of the previously uncharacterized proteome, all framed in the language of breakthroughs and superintelligence. The news coverage of AI and science has been almost uniformly celebratory, reading more like investor relations material than science journalism. On the other side stand researchers who've been here before. An AI PhD researcher on X put it plainly: AutoML made the same promises in 2017, and it did not pan out; ignore the hype, master the fundamentals. The post got over a thousand likes from an audience that presumably needed to hear it.

What's clarifying is where the optimism actually lives. ArXiv, where working researchers post their results before the press releases exist, has been running warmer than usual, with a genuine uptick in papers on agentic systems and embodied AI for scientific workflows. But the warmth there is procedural, not triumphalist. A paper on "utility-guided orchestration" for LLM tool usage gained some traction on Bluesky, celebrated precisely for being "nuanced" and "pragmatic," words that function, in that community, as implicit critiques of everything else being said about AI and science right now. Bluesky's science-adjacent users are not hostile to AI research; they're hostile to the gap between what gets announced and what gets demonstrated.

MIT's Professor Buehler posted about ScienceClaw, an open-source AI swarm for decentralized scientific discovery, and the framing was explicitly corrective: "many AI for science efforts fall into the trap of assuming" certain things about how discovery works. A researcher replying to a different thread made the same move from a different angle: the question isn't AI versus humans by 2035, it's whether we're ready for a world where scientific genius is a system property rather than an individual one. These posts aren't anti-AI. They're doing something more interesting, refusing to let the pharma press release version of AI science stand in for the actual epistemic problem.

The divergence between news coverage and researcher sentiment on this beat has been persistent enough to be structural, not incidental. Science journalism covering AI tends to cover products and deals; researchers covering AI science tend to cover methods and failure modes. The pharma superintelligence framing that Insilico's CEO offered to Observer is genuinely believed by the people saying it, but it speaks a language that the Kepler anecdote quietly dismantles. The hard part of science was never the loop. It was always the leap.

AI-generated

This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.

More Stories

IndustryAI Industry & BusinessMediumMar 27, 6:29 PM

A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat

A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.

PhilosophicalAI Bias & FairnessMediumMar 27, 6:16 PM

Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise

A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.

IndustryAI in HealthcareMediumMar 27, 5:51 PM

The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care

A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.

SocietyAI & Social MediaMediumMar 27, 5:32 PM

Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet

A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.

PhilosophicalAI ConsciousnessMediumMar 27, 5:14 PM

Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists

A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.

From the Discourse