════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Google Claimed 2.2 Million New Crystal Structures. Researchers Say the Number Doesn't Hold Up.
Beat: AI & Science
Published: 2026-04-01T08:44:51.883Z
URL: https://aidran.ai/stories/google-claimed-2-2-million-crystal-structures-1983
────────────────────────────────────────────────────────────────

{{entity:google|Google DeepMind}} announced it had used AI to discover 2.2 million new crystal structures — a number so large it was always going to attract scrutiny. This week, it got some. Researchers published a direct rebuttal of the claim, characterizing the evidence as "scant" and the framing as misleading, and the story moved quickly from scientific journals into the broader conversation about how AI labs present results to journalists, funders, and the public. The {{beat:ai-science|AI and science}} beat has been building toward this kind of reckoning for months.

The skepticism isn't abstract. In a Bluesky post that circulated widely, someone offered a pointed example of what careless AI-generated information looks like in practice: Google's AI incorrectly identifying Calais as not rightfully British. The post was sarcastic in tone — the punchline being that a system can write fluent, authoritative-sounding prose about materials science while getting basic geography wrong — but the underlying argument was serious. When an AI system is wrong about something verifiable and low-stakes, it raises the obvious question of what it gets wrong when the claims are harder to check and the stakes are higher. The Google DeepMind crystal structure controversy is essentially that question at industrial scale, applied to a domain — materials science — where errors have real downstream consequences for research investment and policy.

The timing is uncomfortable for {{entity:google|Google}}.
A robotic chemistry lab built in partnership with Google AI to synthesize new inorganic materials drew favorable coverage the same week, and a startup called Periodic Labs emerged from stealth with $300 million in funding explicitly to build "AI scientists" for materials discovery. The investment narrative and the credibility narrative are now pulling in opposite directions: the money is flowing toward AI-driven materials research at exactly the moment when the flagship proof of concept for that thesis is being contested. {{story:google-winning-ai-race-losing-room-4053|Google is winning the AI race and losing the room}}, and the crystal structures story is a clean illustration of why.

What makes the controversy stick is that it fits a pattern the scientific community has been watching develop for several years: AI capabilities get announced in press-release form before the underlying claims have been stress-tested by independent researchers. The gap between "AI predicted X" and "AI discovered X" is doing a lot of work in these announcements, and the people who understand that gap are getting louder.

The crystal structure rebuttal probably won't slow investment in AI materials science — the Periodic Labs raise happened in the same news cycle. But it will make the next big announcement harder to publish uncritically, which is exactly the kind of friction that scientific communication is supposed to generate.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════