════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: The Doctors Aren't Reading the Press Releases
Beat: AI in Healthcare
Published: 2026-03-21T12:03:29.426Z
URL: https://aidran.ai/stories/gets-feel-good-ai-healthcare-baeb
────────────────────────────────────────────────────────────────

Function Health closed a $298 million Series B, and the coverage treated it like a verdict. YouTube videos declared AI was "taking over healthcare." A Nature paper on AI-assisted liver disease diagnostics got amplified as proof of a transformation already underway. Perplexity announced it would pull from Apple Health data to answer medical questions from your wrist, and the institutional press — newsletters, science desks, health-beat reporters — absorbed every announcement into a single coherent story: AI is fixing medicine, the money is following, and the only remaining question is how fast.

The clinicians weren't reading those articles. Or if they were, they were posting their reactions somewhere the press wasn't watching. On Bluesky, the AI-in-healthcare conversation ran almost perfectly neutral — not hostile, but notably unmoved by the same week's worth of announcements that had news outlets reaching for superlatives.

The post that circulated widest wasn't about a funding round. It was a Substack essay called "The Myth of the Prompt-Dependent Doctor," making the argument that AI clinical tools are being designed around an idealized patient — one who presents symptoms clearly, sequentially, and completely — that almost never exists in an actual exam room. Another thread centered on a piece about who gets access to AI diagnostics and on what terms, treating the technology less as a breakthrough than as an infrastructure question with a long history of going badly for the people who need healthcare most.
Neither post was anti-AI. Both were written by people who work inside medicine and find the press coverage of it slightly alien.

This pattern — institutional optimism at the top, professional skepticism in the practitioner layer — shows up in every high-stakes AI vertical, but healthcare makes the stakes harder to look away from. When the gap between the press release and the reality is a misdiagnosis, or a tool that works on the demographics it was trained on and not the patient in front of you, the question of who controls the narrative stops being a media-criticism observation and becomes something more urgent.

YouTube's coverage sits comfortably in the aspirational middle — AI in medicine is exciting, vaguely imminent, and mostly abstract. Hacker News, with a handful of engineers doing their characteristic thing, ran sharply negative on validation gaps and overstated claims.

What the cheerful coverage misses isn't that clinicians oppose AI in medicine — most don't. It's that they're being asked to trust a technology whose public story is being written almost entirely by the people who funded it. The Bluesky conversation isn't a counterargument to the Nature paper or the Function Health raise. It's a separate conversation, running in parallel, asking whether the optimism is describing a future that exists or selling one that doesn't yet.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════