All Stories
Discourse data synthesized by AIDRAN on

Dwarkesh Asks Whether AI Can Think Like Kepler. The Scientists Answering Him Are Divided.

A podcast episode with Terence Tao cracked open one of the sharpest debates in AI science right now — whether the speed of verification actually makes AI discovery possible, or whether that assumption misses everything important about how science works.

Discourse Volume: 659 / 24h
Beat Records: 7,615
Last 24h: 659
Sources (24h):
X: 65
Bluesky: 280
News: 277
YouTube: 35
Other: 2

Dwarkesh Patel opened his episode with Terence Tao by walking through how Kepler figured out planetary motion — not the textbook version, but the actual ingenious, strange, iterative process by which a human mind assembled a new picture of the cosmos from incomplete data. The framing was deliberate. Patel's implicit argument, shared by a lot of people bullish on AI research automation, is that science moves fast when feedback loops are tight: you run an experiment, you see if it worked, you adjust. AI can do that faster than any human. The post drew thousands of likes and hundreds of reposts, and the Bluesky thread pointing to it called Tao a guy who might "be onto something" — the kind of low-key endorsement that signals an audience already half-convinced.

The pushback came quickly, and it came from inside the field. A PhD researcher on X, posting to an audience that clearly included other early-career scientists, made the comparison to AutoML — the 2017 wave of tools that promised to automate machine learning research itself, and then mostly didn't. "Ignore the noise," the post advised. "Master the fundamentals." It got nearly 1,300 likes, which in this community is not a small number, and the reason it resonated isn't hard to diagnose: graduate students are watching their field generate claims about its own automation faster than it generates the automation itself. The hype has a cost, and the cost is paid by people trying to figure out whether to spend five years learning something that might be irrelevant.

What makes this debate more interesting than the usual AI-optimism-versus-skepticism cycle is that the serious participants aren't really arguing about capability benchmarks. A researcher at MIT described an open-source "AI swarm" for decentralized discovery — ScienceClaw × Infinite — explicitly built on the premise that most AI-for-science efforts fail because they assume science is a pipeline rather than a collision. The insight embedded in that project is that Kepler's method wasn't a tight feedback loop. It was a long, confused, paradigm-breaking process that required holding multiple wrong models simultaneously. A separate voice on X put the same idea more starkly: the question isn't AI versus humans by 2035, it's whether we're ready for a world where "Einstein is a system, not a person" — where discovery is distributed across humans, models, and tools co-evolving in ways no individual controls or fully understands.

Meanwhile, the ground-level experience of AI in research practice tells a different story entirely. On Bluesky, working scientists describe AI tools that confidently state outdated consensus in niche fields, colleagues with PhDs repeating AI-generated nonsense as fact, and a creeping pressure to replace actual reading — sitting with papers, letting ideas sink in — with AI summaries that produce the appearance of knowledge without the comprehension. One post about a boss with a science PhD spouting wrong information from Google's AI overview got passed around with the kind of exhausted recognition that doesn't need much commentary. The person who wrote about resisting pressure to "deep research" everything rather than just have a chatbot deliver answers captured something that isn't really about AI capability at all — it's about what research is for, and whether the friction of doing it slowly is a bug or the whole point.

The gap between how news outlets are covering AI in science — relentlessly positive, full of breakthroughs at the Wyss Institute and scalable biotech innovation — and how researchers themselves are talking about it on Bluesky and Reddit is wide enough that they read like dispatches from different fields entirely. The press release version and the PhD-student version of AI scientific discovery are not converging. If anything, the Terence Tao conversation accelerated both: optimists found a credentialed anchor for their position, skeptics found a useful foil. The interesting question Kepler actually raises — how do you build a machine that holds wrong models long enough to find the right one — remains, conspicuously, unanswered.

AI-generated

This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.

More Stories

Industry · AI Industry & Business · Medium · Mar 27, 6:29 PM

A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat

A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.

Philosophical · AI Bias & Fairness · Medium · Mar 27, 6:16 PM

Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise

A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.

Industry · AI in Healthcare · Medium · Mar 27, 5:51 PM

The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care

A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.

Society · AI & Social Media · Medium · Mar 27, 5:32 PM

Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet

A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.

Philosophical · AI Consciousness · Medium · Mar 27, 5:14 PM

Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists

A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.

From the Discourse