════════════════════════════════════════════════════════════════
                          AIDRAN STORY
════════════════════════════════════════════════════════════════

Title: Scientists Invented a Fake Disease to Test AI. It Spread the Diagnosis Anyway.
Beat: AI & Misinformation
Published: 2026-04-14T05:31:05.569Z
URL: https://aidran.ai/stories/scientists-invented-fake-disease-test-ai-spread-bafa

────────────────────────────────────────────────────────────────

Scientists invented a disease that doesn't exist, fed it to AI systems, and watched the systems confirm it. The experiment, circulating this week among AI skeptics, found that the fictional condition — absent from every medical database and textbook — was validated by AI as though it belonged there.[¹] The post summarizing the findings stated it with the kind of flatness that precedes outrage: the condition doesn't appear in standard medical literature because it doesn't exist, and the AI said it did anyway.

This experiment lands inside a conversation that has been building pressure for months around Google's AI Overviews. A recent analysis conducted at the request of The New York Times found that AI-generated search summaries are accurate roughly 91% of the time.[²] That figure has been doing strange work in public debate — defenders citing it as reassurance, critics pointing out that a 9% error rate, multiplied across trillions of searches, still amounts to hundreds of billions of wrong answers. What the fake-disease experiment adds to that argument is qualitative: the error isn't random noise filtered out by skeptical users. Studies cited alongside the experiment found that users trusted AI answers nearly 80% of the time even when the AI was demonstrably wrong — a pattern researchers called "cognitive surrender."[³] The phrase traveled fast. It names something people had been feeling without a label for it.

The political dimension of AI-generated misinformation ran parallel this week in a different register entirely. A widely shared post described a convicted felon posting an AI-generated image of himself as Jesus, then — when a reporter asked about it — claiming to be a doctor and calling the coverage "fake news" before taking the image down.[⁴] The post drew the kind of engagement that comes not from shock but from exhausted recognition: AI as prop in a performance of authority, immediately followed by AI as accusation against anyone who documents the performance. These two uses — AI deployed to fabricate authority, AI invoked to dismiss real reporting — are not separate phenomena. They're the same epistemological collapse approached from different directions.

What the fake-disease experiment and the cognitive-surrender research suggest, taken together, is that the misinformation problem with AI isn't primarily about bad actors flooding the zone with false content. It's about what happens when a system that sounds authoritative meets users who have been trained, across years of search engine use, to treat confident retrieval as a proxy for truth. Google built that habit. AI Overviews inherited it and amplified it. The people running controlled experiments on fictional illnesses already know the system will fail. The harder question is whether users who don't know they're in an experiment will notice — and the 80% figure suggests most won't.
────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════