Dwarkesh Patel Asked Terence Tao About AI and Scientific Discovery. The Answer Complicated Everything.
A viral thread about Kepler and tight verification loops has reignited an old fight about whether AI can actually do science — and a PhD student's warning about AutoML déjà vu is landing harder than the optimists expected.
Dwarkesh Patel posted a thread this week that started with Johannes Kepler and ended up somewhere much more unsettling. The setup: his interview with Terence Tao, one of the most celebrated mathematicians alive, opens with the story of how Kepler discovered his laws of planetary motion — a method Patel describes as "absolutely ingenious and surprising." The observation Patel attaches to this is the one that lit up X: people often claim AI will accelerate scientific discovery because of tight verification loops, the idea being that if a system can quickly check whether its output is right or wrong, it can iterate at machine speed. But the Kepler story — a man who spent years reasoning his way toward ellipses with no fast means of checking his answers, guided by something closer to aesthetic conviction than algorithmic feedback — quietly undermines the premise. The thread got thousands of likes, and almost none of the replies pushed back on the framing. That's the interesting part.
The Kepler thread landed in a conversation that was already primed to receive it. Hours earlier, a researcher named Ahatamiz had posted a pointed message to AI PhD students starting their careers right now: don't let the "autoresearch" hype discourage you, because it won't happen. The comparison he reaches for is AutoML — the wave of tools and papers from around 2017 that promised to automate machine learning pipeline design and largely didn't deliver what the headlines claimed. "Ignore the noise. Master the fundamentals." It's a mentor's advice, and it got nearly 1,300 likes, which in the context of AI-skeptical posting on X is a signal of real resonance. The people liking it aren't just disaffected academics — they're early-career researchers who've watched one hype cycle complete its arc and are watching a new one begin.
What makes this moment worth paying attention to isn't that skeptics exist — they always do — but that the skeptical framing has shifted. The old argument against AI in science was about capability: can these models actually reason, or are they just pattern-matching? That debate still runs in the background. The newer argument, the one Patel's Kepler example gestures toward without quite stating, is about epistemology: even if AI systems get very good at optimization within known problem structures, does that constitute discovery? Kepler wasn't running a tight verification loop. He was wrong for years in productive ways. @QuanquanGu, replying in the thread's orbit, tried to reframe it — the question isn't AI versus humans by 2035, it's whether we're ready for a world where "Einstein is a system, not a person." It's a sharp line, but it sidesteps what Tao and Patel seem to be probing: whether the kind of insight that reshapes a field's foundations can be engineered at all, or whether it requires something more like the productive wrongness of a 17th-century astronomer working from Tycho Brahe's observation tables and a hunch.
The news ecosystem covering this conversation is running warmer than the researchers having it — which is a familiar pattern, but worth naming plainly. The gap between institutional science communication and the people inside those institutions has been widening on this question for two years, and threads like Patel's keep reopening it because they refuse the comfortable conclusion in either direction. The AutoML comparison will stick because it's falsifiable in a way that enthusiasm about paradigm shifts isn't. If autoresearch tools in 2030 look like AutoML in 2022 — useful in narrow contexts, oversold everywhere else — the PhD students who heeded Ahatamiz's advice will have made the right bet. The ones who restructured their careers around automation will not get that time back.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.