Crimson Desert Got Caught. The Apology Script Is Getting Old.
Pearl Abyss apologized for AI-generated art in Crimson Desert after players found it — but the creative industries have seen this exact sequence enough times that the apology now makes things worse, not better.
Pearl Abyss issued an apology this week after players found AI-generated assets in Crimson Desert, promising to replace them. The statement landed with a thud on Bluesky, where the pattern is now so familiar it has its own shorthand. "Oops, we're sorry we got caught, we'd never use AI to replace Real Art, that was just a placeholder we forgot about," one user wrote, not bothering to name the studio because it didn't need to be named. At least three other games have cycled through this exact sequence in recent memory. The apology is no longer accepted as closure — it's become evidence.
What's changed is the granularity of the outrage. Players aren't just angry about AI; they're documenting it, reporting it to Steam, screenshotting disclosure language that appeared only after they filed a complaint. One Bluesky user posted side-by-side evidence showing that a game's AI disclosure materialized the day after they reported it — and noted dryly that since the asset hadn't been "replaced through our production pipeline," it apparently met the studio's quality standards all along. The emoji they chose was a clown face. The community understood.
The legal front is moving on a parallel track, and the urgency is real. A Bluesky post with significant traction this week reminded artists that the deadline to file a claim in the Anthropic copyright class action is nine days out — the kind of reminder that suggests a lot of potential claimants still haven't acted. Meanwhile, the Second Circuit is hearing arguments over whether OpenAI violated the DMCA by stripping copyright management information — author names, titles, terms of use — from training data before ingestion. Record labels filed $350 million in lawsuits against AI music companies. The US Copyright Office has already held that works generated entirely by AI, without human authorship, aren't eligible for copyright. The legal architecture around AI and creative work is being built in real time, mostly by people who are angry.
On X, the artist RedCleon drew a sharp social boundary that got nearly 500 likes: anyone who liked "altered, stolen artwork" should block them, because engagement with AI art is support for art theft. It's a maximalist position, but it reflects something genuine about how the community has started to think about complicity. Passive consumption of AI-generated work is being reframed as an active choice — a political one. That framing has spread enough that a Bluesky gamer mentioned avoiding AI-asset games and getting pushback telling them to just accept AI's inevitable improvement. Their response was blunt: "AI art has gotten steadily worse since Secret Horses." Whether or not that's technically accurate, it's a direct rejection of the determinism that AI advocates use to end the conversation before it starts.
The researchers publishing on Bluesky and arXiv are, by contrast, genuinely enthusiastic about what these tools can do — their posts carry the warmth of people working on problems they find interesting. That gap between the people building AI creative tools and the people whose livelihoods depend on creative work isn't narrowing. If anything, the Crimson Desert cycle — undisclosed use, player discovery, corporate apology, promises to replace — has made the professional creative community less interested in good-faith engagement with "but look what it can do." The question they're asking now isn't whether AI art is impressive. It's whether the companies using it ever intended to disclose it at all.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
More Stories
A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat
A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.
Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise
A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.
The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care
A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.
Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet
A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.
Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists
A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.