Adobe Stock Is Half AI Now, and Pearl Abyss Accidentally Shipping Placeholder Art Is the Same Problem With a Different Name
Two stories collided this week to sharpen what the argument over AI in the creative industries is actually about — not whether AI art exists, but who gets to decide when it shows up and whose aesthetic vision gets erased by it.
When players loading up Crimson Desert noticed the textures looked wrong — too smooth, too generic, not quite Pearl Abyss — it took about forty-eight hours for the explanation to surface: AI-generated placeholder art had made it into the final release. The studio apologized. The conversation it accidentally started grew much bigger than anyone expected. On Bluesky, the dominant reaction wasn't outrage at the studio so much as recognition — a collective "of course it did" from people who'd been watching the logic of AI-as-shortcut work its way through production pipelines. One post, collecting several hundred likes, put it plainly: GenAI in game development isn't just unethical in isolation, it "will likely pollute the art-style and writing of the final game." That's a specific claim about aesthetics, not copyright — and it's the argument that's gaining actual traction.
The week also brought Sora's shutdown and Disney's withdrawal from its billion-dollar OpenAI investment, and the timing couldn't have been more useful to the people who've spent two years arguing that AI video generation was economically incoherent and ethically compromised. On X, @ryuumance got 454 likes for a post that did something clever: it celebrated the shutdown of the AI video platform while praising Sora Harukawa, the vtuber whose name benefits now that the association has collapsed. The joke worked because the underlying point was sharp — "deepfake and art theft industry" is the phrase that landed, and it's increasingly the phrase people reach for first. The Bluesky reaction was less witty and more prosecutorial — one post described Sora's four-month run as a "copyright infringement machine" and treated Disney's exit as the market delivering a verdict — but the celebratory mood was the same; the platform treated the whole thing as a holiday.
What's being lost in both the Sora post-mortems and the Crimson Desert pile-on is the subtler argument one Bluesky analyst made quietly, to almost no engagement: "AI doesn't threaten the livelihood of artists whose work already has cult value. The content AI slop is replacing is mostly the artisanal work — marketing, assisting on projects led by others — that provided a livelihood for artists with unprofitable passion projects." That's a harder observation to celebrate or rage at, which is probably why it didn't go viral. The artists making a living doing concept art, placeholder textures, session music, and commercial illustration — the people whose work is the professional infrastructure of the creative industries — are the ones actually losing ground. The established names with cult followings are, for now, fine. Job displacement in creative fields isn't a future threat; it has already sorted itself along lines of prestige.
Meanwhile, hand-drawn work is being flagged as AI-generated with enough regularity that it's becoming its own crisis. @EpicTheFox's post — photos of hand-drawn sketches taken on a phone, accused of being AI — collected 204 likes and far more replies, many from artists with nearly identical stories. The post asked a question that no one has a satisfying answer to: if traditional art taken on a phone camera is indistinguishable from AI output to enough people, what does that do to the artists who aren't yet established, who don't yet have the audience that would take their word for it? The stigma is spreading faster than the detection tools. One Bluesky user put it with appropriate dread: "I am finding it so scary how so many people are struggling to tell when things are AI generated. Even those shitty piss-filter cartoons seem to be passing as actual art." The aesthetic homogenization that AI output produces — what another post described as running "all aesthetics through this faux-realistic hypergeneric porn filter" — is now contaminating how people read human-made work.
The Ko-fi survey situation this week illustrated how this dynamic lands institutionally. Artists on the platform — which exists specifically to support independent creators — flooded a survey with responses opposing AI-generated content after Ko-fi declined to ban it outright. Posts ranged from exhausted to satirical, with one noting "I am quite tired of this. So many companies that are in 'support' of authentic artists turn around to cater against the humanities." The gap between what platforms say about supporting creators and what they actually permit when AI companies are involved isn't new, but it keeps producing the same bitter recognition. The copyright arguments are still grinding through the courts — the Chicago Tribune's suit against Perplexity is still in scheduling, with deadlines pushed to mid-2026 — but artists in creative communities have largely stopped waiting for legal or ethical frameworks to resolve anything. The Crimson Desert incident matters precisely because it wasn't a bad actor or a startup cutting corners. It was a respected studio, shipping a product people cared about, with AI assets that slipped through because somewhere in the pipeline someone treated them as neutral placeholders. That's the industry's actual problem, and an apology doesn't fix it.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
More Stories
A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat
A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.
Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise
A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.
The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care
A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.
Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet
A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.
Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists
A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.