Dwarkesh Patel Asked Terence Tao About Scientific Discovery. The Answer Complicated Everything AI Companies Are Promising.
A conversation with one of the world's greatest mathematicians is quietly reframing how the AI-and-science community thinks about verification, intuition, and what 'progress' actually means. The hype hasn't caught up yet.
Dwarkesh Patel's conversation with Terence Tao opened with Kepler — specifically, with how strange and circuitous the actual path to discovering the laws of planetary motion was. Patel used this to probe a claim you hear constantly in AI-and-science circles: that AI will accelerate discovery because scientific work has tight verification loops, meaning you can check whether you're right quickly, and fast feedback enables fast progress. The thread drew nearly 3,400 likes and close to 500 reshares, and it moved not because Patel disproved the claim, but because Tao's example made the claim feel naive. Kepler didn't iterate toward truth through quick checks. He spent years on a model he believed in that was subtly wrong, and the correction required a kind of stubborn, almost irrational persistence that doesn't obviously translate into a training signal.
The thread landed the same week an AI PhD student named Ahatamiz posted a blunter version of the same skepticism, aimed directly at the "autoresearch" wave. "AutoML made the same big promises in 2017, and we all know how that turned out," he wrote, pulling in over 1,200 likes. The comparison is pointed: AutoML was supposed to make machine learning researchers obsolete by automating model search. Instead, it became a useful but narrow tool, and the researchers who'd panicked about it spent the next several years doing work AutoML couldn't touch. His advice to incoming AI PhD students — ignore the noise, master fundamentals — reads less like career advice and more like institutional memory that the hype machine keeps erasing.
On the other side of this argument, and not quite addressing it, is a cluster of voices trying to reframe what AI-assisted science even means. A researcher named QuanquanGu put it most cleanly: "The question isn't 'AI vs humans by 2035' — it's whether we're ready for a world where Einstein is a system, not a person." That framing gets at something real. The current debate is often structured as a race — will AI replace scientists before 2030, 2035, 2040 — when the more plausible near-term future is something harder to evaluate: humans, models, and tools co-evolving in ways where attribution of discovery becomes genuinely murky. MIT's Professor Buehler is already building toward that with ScienceClaw, an open-source AI swarm designed for decentralized scientific discovery, explicitly modeled on the idea that breakthroughs come from collisions between disciplines, not from optimizing within them. Whether that's visionary or elaborate infrastructure for incremental gains is an open question.
What's happening below these high-engagement conversations is more corrosive and less legible. Researchers on Bluesky are describing a slow erosion they don't quite have language for yet. One complained about AI-generated graphics appearing in scientific presentations — "What do you mean this random arrow is pointing upwards to a smiling star?" — which sounds trivial until you realize that scientific figures are a form of argument, and when they stop meaning anything, something upstream has broken. Another described receiving so many survey invitations about AI usage in research that they'd drafted an auto-reply declaring a permanent refusal. A third linked to a Facebook post about AI "polluting" history interest groups with confident-sounding confabulation. These aren't coordinated complaints. They're independent people bumping into the same problem from different angles: the tools arrived faster than any consensus about when and how to use them.
The gap between how news covers AI and science and how researchers on Bluesky and Reddit experience it has become a structuring fact of this beat. Institutional coverage — the WHO announcing AI-assisted pandemic detection, journals publishing AI-and-climate research, Alzheimer's prize expansions — reads with the optimism of a press release, because it mostly is one. The researchers living inside these systems are describing something more ambivalent: workflows increasingly mediated by models whose reasoning they can't fully inspect, a worry that judgment is becoming invisible even when it's still present, and a creeping suspicion that "efficiency" is being used to justify offloading things that probably shouldn't be offloaded. The Patel-Tao thread is resonating because it offers a way to hold both things at once — AI can be genuinely useful in science and still be nowhere near the autonomous discovery engine the headlines keep promising. Kepler's stubbornness was the point, not a bug to be engineered out.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.