A fictional illness called Bixonimania was invented to test AI systems. Multiple chatbots described it as real. The community's response was more telling than the test itself.
There's a fictional disease called Bixonimania. It exists in a handful of obviously fake academic papers, planted there as a test. When researchers fed it to AI chatbots, multiple systems warned users about it as though it were real — symptoms, risks, the works. The story surfaced on Bluesky this week, tagged to a Nature article, and the community response was almost perfectly split between alarm and exhaustion.
The exhaustion is the more interesting half. One IT professional on Bluesky framed the whole episode as an "xkcd/2501 moment" — a reference to the webcomic's running joke about AI confidently hallucinating — and noted that "AI telling people they might have a fake disease" felt like "water is wet" news. That post, resigned rather than outraged, captures something real about where the conversation about AI and misinformation has arrived: the surprising thing is no longer that these systems fabricate; the surprising thing is that we keep expecting them not to.
Zooming out from the fake disease: the Google AI search story running in parallel this week — in which more than half of AI responses were "ungrounded," linking to pages that didn't actually support the information provided — suggests Bixonimania isn't an edge case. It's a demonstration of a baseline condition. The EU has responded by banning AI-generated images from its own official communications, an institutional opt-out that is either principled or an admission of defeat, depending on your priors. What the online conversation hasn't worked out yet is what the non-institutional version of that response looks like — what ordinary people do when they can no longer tell whether the thing they just read about their health was assembled from evidence or confabulated from pattern matches. The IT professional called it water being wet. The trouble is that wet water can still drown you.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
A lawsuit alleging that UnitedHealthcare used a faulty AI to wrongly deny Medicare Advantage claims just cleared a major threshold — and Bluesky already scripted what comes next.
A satirical Bluesky post imagining a medical AI refusing to extend life support without payment captured everything the news coverage of Utah's prescribing law couldn't quite say directly — and it spread faster than any optimistic headline about the same legislation.
News outlets are celebrating AI's power to predict hurricanes and save lives. On Bluesky, someone noticed that a proposed AI data centre in rural Alberta is being built without a formal environmental impact assessment — and nobody in the good-news stories seems to know it.