An Illustrator Caught an AI Account Using Her Dinosaur Art to Spread Fake Science. The Reaction Was Immediate.
When @DragonsofWales discovered that her paleontological illustrations had been scraped and misattributed to a nonexistent South Korean dinosaur species, her post became a case study in how AI-laundered misinformation works — and who pays for it.
An illustrator who goes by @DragonsofWales on X posted something last week that could serve as a textbook example of how AI misinformation actually spreads. Someone had scraped her paleontological artwork, stripped her name from it, and used it to promote what was being described as a newly discovered South Korean dinosaur species — which does not exist. "This is NOT a new South Korean dinosaur," she wrote, in a post that drew over a thousand likes. "It's bad enough having my work used without consent, but even worse when it's used to spread misinformation. So sick of these parasites." The post landed hard because it named something people had been struggling to articulate: the pipeline isn't just AI generating false images; it's AI laundering real creative work into false claims, borrowing the credibility of an actual artist to make fabricated science look documented.
What makes this specific case worth sitting with is that it collapses two grievances that usually run in parallel — artists angry about unauthorized scraping, and researchers and journalists angry about AI-generated falsehoods — into a single transaction. The fake dinosaur announcement didn't need to generate its own imagery because it could take hers. The authority her detailed scientific illustration carried transferred to the misinformation almost automatically. This is the mechanism that a Bluesky user described separately this week, noting that sorting AI fakes from real content "requires a degree of subject matter expertise" because the usual trust signals — verified accounts, follower counts, institutional affiliation — have been thoroughly decoupled from accuracy. They weren't describing an abstract epistemological crisis. They were describing what happened to @DragonsofWales's work.
The frustration spreading across this conversation isn't primarily about deepfakes or election interference, though those threads are very much alive — Texas Republicans allegedly using synthetic video against a Democratic state legislator, New York pushing to ban AI-generated candidate imagery, Japan watching deepfakes cloud a national election. Those are the stories getting legislative attention. But the @DragonsofWales post points at something the legislation doesn't address: the everyday, low-glamour version of AI misinformation, where a scraping account takes a real expert's work, removes the expert, and publishes the result as though expertise were a filter anyone could apply retroactively. Google's AI Overviews are doing versions of this constantly, which is what @lharv22 was venting about separately — not a specific incident but the cumulative weight of a search tool that "shows misinformation half the time" now sitting between users and the information they're trying to verify.
The illustrator will almost certainly not be credited or compensated. The fake dinosaur announcement will circulate until it doesn't, then get replaced by the next one. What's actually being depleted here isn't any individual artist's career — it's the friction that used to make false scientific claims expensive to produce and easy to debunk. Detailed scientific illustration took time, expertise, and a name attached to it. Strip the name, feed the image to an account with no accountability, and that friction disappears entirely. The laws being drafted right now are aimed at synthetic media. They have nothing to say about real media, stolen and repurposed, which is the older trick and apparently still the more effective one.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.