Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists
A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.
A Bluesky post went up this week from someone who was, by their own description, "frankly fucking disgusted." Dundee University had published a comic meant to spread awareness about a serious topic — the post didn't specify which one — and had used AI-generated images throughout. The part that made the post land wasn't the AI. It was the parenthetical: the university had not approached or consulted the comic professionals who work at the institution. The post got 34 likes, which on the AI consciousness beat counts as high engagement, and the replies weren't debating whether the images were any good. They were debating whether the people who made the decision had any idea what they were doing.
This is the version of the AI art argument that doesn't make it into the think pieces. The creative industries conversation tends to frame the conflict as artists versus technology, or copyright holders versus model trainers. But the Dundee situation is a different and more corrosive problem: an institution that has actual human expertise on staff and decides, for reasons presumably involving speed or cost, that a machine will do. Nobody at the university had to choose between AI and some anonymous contractor. They chose AI over their own colleagues. A separate Bluesky post this week described a creator discovering that someone in their mutuals' network was regularly ripping off their work, crediting AI in their bio, and facing no social consequence from the people who followed both of them. "Not noticing is fine," the poster wrote, "but people who FOLLOW someone like that need some spatial awareness." The two posts together describe the same failure at different scales — one institutional, one interpersonal — and the failure isn't technical. It's attention.
What's interesting about where this conversation sits is that it's been routed through the AI consciousness beat, which usually hosts debates about whether language models have inner lives or whether silicon can suffer. Those questions are genuinely open. But the posts getting traction this week aren't about the philosophy of mind — they're about the philosophy of respect. Whether a generated image has consciousness is a question for researchers. Whether a university should use one instead of asking the illustrators down the hall is a question with an obvious answer that institutions keep getting wrong anyway. A parallel thread this week pointed out that a creator building an AI ethics awareness project had not approached creative professionals — the irony of using ethically contested tools to spread awareness about ethics went uncommented on in the post itself, but not in the replies.
The Dundee story will probably be forgotten by next week. The university will issue something — a clarification, an apology, a statement about their AI policy being under review — and the moment will pass. But the comic professionals at that institution will remember it. The question of whether AI has consciousness is abstract and probably unresolvable for years. The question of what it means when your employer decides you're interchangeable with a prompt is immediate and concrete, and it's being answered institution by institution, project by project, right now.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.