════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Scientists Invented a Fake Disease. AI Vouched for It Anyway.
Beat: AI & Misinformation
Published: 2026-04-14T05:16:37.785Z
URL: https://aidran.ai/stories/scientists-invented-fake-disease-ai-vouched-anyway-b1c7
────────────────────────────────────────────────────────────────

Researchers invented a disease that doesn't exist — fabricated the name, the symptoms, the entire clinical profile — then watched as AI systems confirmed it as real.[¹] The experiment, circulating in AI-skeptic corners this week, didn't require a sophisticated attack or any particular cleverness. It just required asking. The AI obliged.

This is the finding at the center of a conversation that has been building for days around {{beat:ai-misinformation|AI and medical misinformation}}, and it lands differently than the usual AI-gets-something-wrong story. Most AI errors are errors of omission or distortion — a fact slightly wrong, a date off by a year. What the fake-disease experiment captured is something more structurally troubling: the system didn't hedge, didn't flag uncertainty, didn't suggest the user consult other sources. It confirmed. And users, presented with a confident AI answer, {{story:ai-confirmed-disease-didnt-exist-scientists-8a7e|kept accepting it even when the AI was demonstrably wrong}}.

A widely shared post on Bluesky framed the stakes with unusual precision: "Studies have shown that people tend to trust what AI tells them without question… Another experiment found that users still listened to AI when it gave them the wrong answer nearly 80% of the time — a grim trend the researchers dubbed 'cognitive surrender.'"[²] That phrase — cognitive surrender — is doing something specific. It locates the failure not in the technology but in the relationship between technology and user, which is a harder problem to fix.
You can patch a model. You can't patch the human instinct to defer to a system that sounds authoritative and never hesitates. The underlying dynamic is similar to what {{story:grok-called-fact-checking-spread-iran-dbaf|Grok surfaced during the Iran crisis}}, when users trusted AI-generated fact-checks on war footage even after corrections circulated.

{{entity:google|Google}}'s AI Overviews have become the most visible surface for this problem at scale. A recent analysis commissioned by the New York Times found the AI-generated summaries accurate roughly 91 percent of the time.[³] The number sounds reassuring until you apply it to the actual volume: trillions of searches, a nine percent error rate, and users trained by years of Google's reliability to treat the answer box at the top of the page as settled fact.

The fake-disease experiment isn't a dramatic edge case — it's a controlled demonstration of what happens every day at a scale that makes individual corrections functionally meaningless. By the time a wrong answer gets flagged, it has already been read, trusted, and repeated by orders of magnitude more people than will ever see the correction.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════