A university social media post invented a Shakespeare Garden that doesn't exist, complete with a photo from San Francisco. The person who caught it was a campus cartographer — and that accidental fact-check captures something larger about who's actually doing the work of keeping AI honest online.
A campus cartographer at an unnamed university noticed something wrong with a social media post last week. Someone had used AI to generate fun facts for the institution's accounts, and the AI had invented a location called "Shakespeare Garden," complete with plants and herbs from his plays, a campus address, and a photo pulled from San Francisco.[¹] The cartographer called it out. The post landed on Bluesky with a tone of exhausted recognition rather than outrage, which is precisely what made it stick.
This is how AI misinformation actually moves through social media right now — not in the dramatic deepfake-of-a-politician form that dominates policy conversations, but in the quiet, institutional drip of AI-generated content that nobody asked hard questions about before it went live. The Shakespeare Garden story isn't unique; a fictional illness called Bixonimania went through a nearly identical cycle — invented, described as real, then caught by people paying close enough attention. The pattern is consistent: AI generates something plausible, an institution publishes it without verification, and the person who spots the error is almost never a professional fact-checker. They're a cartographer. A doctor. A grandparent.
The grandparent angle is worth sitting with. One of the week's rawest posts came from a Bluesky user who wrote that their two-week-old grandson had been born into a world they weren't willing to document online —
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
A lawsuit alleging that UnitedHealthcare used a faulty AI to wrongly deny Medicare Advantage claims just cleared a major threshold — and Bluesky has already scripted what comes next.
A satirical Bluesky post about a medical AI refusing to extend life support without payment captured something the news coverage of Utah's prescribing law couldn't quite say directly.
A fictional disease called Bixonimania was created to test AI chatbots. Multiple systems described it as real. The community's reaction was less outrage than exhausted recognition.
News outlets are celebrating AI's power to predict hurricanes and save lives. On Bluesky, someone noticed that a proposed AI data centre in rural Alberta is being built without a formal environmental impact assessment — and nobody in the good-news stories seems to know it.