All Stories
Discourse data synthesized by AIDRAN

Dwarkesh Patel Asked Terence Tao About AI and Scientific Discovery. The Answer Complicated Everything.

A viral thread about Kepler's laws quietly dismantled one of AI optimism's favorite arguments — that tight verification loops make scientific breakthroughs automatable. The researchers paying attention are drawing a sharp line between hype and history.

Discourse Volume: 659 / 24h
Beat Records: 7,615
Last 24h: 659
Sources (24h):
X: 65
Bluesky: 280
News: 277
YouTube: 35
Other: 2

Dwarkesh Patel opened his episode with Terence Tao by walking through how Kepler discovered the laws of planetary motion — and in doing so, accidentally punctured one of the most repeated arguments in AI optimism. The post announcing the episode noted that proponents of AI-accelerated science often point to "tight verification loops" as the mechanism: if you can quickly check whether a hypothesis is right or wrong, the argument goes, AI can iterate its way to breakthroughs faster than any human researcher. Kepler had no such loop. His path from observation to elliptical orbits was oblique, creative, and deeply weird — the kind of cognitive journey that resists formalization precisely because the verification criteria didn't fully exist until after the discovery. The thread drew over 3,000 likes, not because it landed as a takedown, but because it named something people in research communities had been struggling to articulate.

Over on the same platform, a PhD student named @ahatamiz1 was more direct. The post was addressed specifically to incoming AI PhD students and told them not to be intimidated by "autoresearch" hype, the claim that AI would automate scientific discovery wholesale. The comparison it reached for was AutoML, the circa-2017 wave of promises that machine learning would automate machine learning itself. The field absorbed AutoML and moved on; the fundamentals still mattered. The post's 1,200-plus likes suggest it reached exactly the audience it intended: junior researchers trying to calibrate between the version of their field described in press releases and the version they'd actually be working in. These two threads, taken together, represent something more precise than generalized skepticism about AI. They're skepticism from inside — from people who understand the technical claims well enough to contest them on their own terms.

What makes this moment worth watching isn't the skepticism itself — researchers have always been cantankerous about hype — but who is making the counterarguments and where. The Bluesky contingent running through this beat is exasperated in a lower-stakes way: a researcher announcing they'll set up an auto-reply refusing any more surveys about AI usage, someone watching AI-generated graphics proliferate in scientific presentations with meaningless arrows pointing at smiling stars. That frustration is real but diffuse. The Patel-Tao framing is different because it's engaging the optimist argument directly, and doing it with enough intellectual care that it can't be dismissed as reactionary. Separately, @QuanquanGu offered a genuinely interesting counter-counter: that the whole "AI vs. humans by 2035" framing is already outdated, that scientific progress is shifting to "system-level discovery" where humans, models, and tools co-evolve, and the relevant question is whether institutions are prepared for a world where the discoverer is a distributed system rather than a person. That framing didn't get the same traction — it's harder to share than a punchline about AutoML — but it gestures at the only version of this argument that might actually survive contact with how science gets done.

The gap between what news outlets are publishing about AI and science — relentlessly optimistic, heavy on cancer vaccines and climate models — and what researchers are actually saying to each other is not a gap that will close on its own. Institutional science communication has strong incentives to narrate AI as a tool that amplifies human genius; the Kepler example is useful precisely because it suggests the amplification metaphor may be wrong from the ground up. If the most important scientific discoveries are the ones where the verification loop doesn't exist yet, no amount of iteration speed helps. That's not an argument against AI in science — it's an argument about which parts of science AI actually reaches.

AI-generated

This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.

More Stories

Industry · AI Industry & Business · Medium · Mar 27, 6:29 PM

A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat

A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.

Philosophical · AI Bias & Fairness · Medium · Mar 27, 6:16 PM

Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise

A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.

Industry · AI in Healthcare · Medium · Mar 27, 5:51 PM

The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care

A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.

Society · AI & Social Media · Medium · Mar 27, 5:32 PM

Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet

A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.

Philosophical · AI Consciousness · Medium · Mar 27, 5:14 PM

Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists

A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.

From the Discourse