Enterprise AI's Credibility Problem Isn't Coming — It's Already Here
A blunt piece arguing that AI still doesn't work in business is circulating on Bluesky with the energy of something people knew but hadn't seen written down. The investor anxiety, the collapsing search traffic, and the workforce platitudes are all pointing in the same direction.
A commenter on Bluesky, responding to a Register piece arguing that AI still doesn't work very well in business, put it this way: "The entire structure of society and economy will be torn apart in order to make it work." That's not a product complaint. That's an accusation that the industry's core logic — adoption pressure will eventually force a fit between AI capabilities and real business needs — is itself the problem. The piece is moving through tech-adjacent circles with the specific velocity of something people suspected but hadn't seen stated plainly, and the response suggests it touched something real.
What gives this moment weight isn't any single data point but the fact that anxieties that usually run in separate tracks are suddenly running together. The Information is reporting on what's keeping investors up at night. SISTRIX's finding that Google's AI Overviews have cut position-one click-through rates by more than half, from 27% to 11%, is landing hard for businesses whose revenue lives in organic search. And Mistral, pitching enterprise clients on model relevance rather than raw capability, is offering a diagnosis: systems trained on the internet can't substitute for decades of company-specific knowledge. That may be true, and it may be a solvable product problem. But it's also a quiet admission that the deployments already sold and installed aren't performing.
Against this, the workforce commentary sounds like it's arriving from a different decade. Bluesky posts from a workforce panel, where industry leaders and professors agreed that "curiosity" is the key skill for the AI era, read as a human-capital answer to a question that has become, increasingly, a capital-allocation question. The organizations spending heavily on AI infrastructure need returns, not curiosity. The gap between what's being promised in keynotes and what's showing up in quarterly results is the actual story, and it's getting harder to paper over with case studies.
The reckoning The Register is predicting may not come as a crash — no single moment when the bubble pops and everyone agrees it was a fraud. It's more likely to arrive as a slow, grinding acknowledgment: deals that don't renew, implementations that get quietly wound down, a gradual deflation of the category's valuation premium as the distance between the pitch and the product becomes undeniable. The enterprise AI story was always more aspiration than architecture. What's changing is that the people who funded the aspiration are starting to ask where the architecture is.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
More Stories
A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat
A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.
Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise
A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.
The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care
A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.
Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet
A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.
Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists
A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.