Kepler Didn't Have Tight Feedback Loops, and That's Dwarkesh Patel's Whole Point
A viral thread about how Kepler actually discovered his laws of planetary motion is quietly dismantling one of AI's favorite arguments about scientific progress.
Dwarkesh Patel opened a thread this week with Johannes Kepler, and it landed harder than most AI discourse does. The setup, drawn from a Terence Tao conversation, was Kepler's method of discovering his laws of planetary motion — not the clean, textbook version, but the actual process, which was ingenious and strange and nothing like the rapid-iteration model that AI boosters love to invoke. The thesis Patel was pushing back on is one you've heard a hundred times: that AI will accelerate scientific discovery because science has tight verification loops, meaning a model can propose hypotheses, check them against data, and iterate at machine speed. Kepler, the thread implies, complicates that story considerably. The post drew over 3,300 likes in 48 hours, making it the most-engaged piece of AI-and-science content in the current cycle by a wide margin.
The "tight verification loops" argument has become a load-bearing pillar of the AI-for-science case. It shows up in papers, in VC pitches, in conference keynotes. The logic is seductive: if the bottleneck in science is time spent waiting for experiments to confirm or refute a hypothesis, then a system that can run millions of simulated checks per second should produce breakthroughs at unprecedented speed. What the Kepler example cuts against is the assumption that the hard part of discovery is verification at all. Kepler spent years working with Tycho Brahe's observational data not because he lacked a way to check his math, but because he didn't know what he was looking for. The conceptual leap — that planetary orbits might be ellipses rather than circles — wasn't generated by iterating faster. It required abandoning a framework that had held for two thousand years.
The thread isn't alone. An AI PhD student posting under @ahatamiz1 made the same structural argument from a different angle, telling newer researchers to tune out the "autoresearch" hype entirely. The comparison to AutoML in 2017 is pointed: that wave of automation promises peaked, plateaued, and settled into being infrastructure rather than revolution, useful but not the end of the machine learning researcher as a job category. The post got nearly 1,300 likes, which suggests it resonated with exactly the audience it was meant for: people building careers in AI research who are being told, repeatedly, that their field is about to automate itself out from under them.
What's emerging isn't a straightforward backlash against AI in science — it's something more specific. The skeptics aren't arguing that AI is useless for research. They're arguing against a particular story about *how* science works, one that flatters AI's strengths by assuming discovery is mostly optimization. If the hardest moments in scientific history were moments of conceptual rupture — paradigm breaks that no amount of faster iteration would have produced — then the tight-verification-loop argument is selling a capability AI genuinely has by claiming credit for a problem AI hasn't solved. Kepler didn't need a faster calculator. He needed a different idea about what shape an orbit could be. That's the argument Patel is threading through a historical example, and it's a better challenge to the AI-accelerates-science thesis than most of the academic rebuttals currently in circulation.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.