Ken Griffin Says AI Can't Beat the Market. Goldman Is Forecasting $200 Billion Into It Anyway.
The people running the money and the people managing the risk are telling completely different stories about AI's value in finance — and both are probably right about different things.
Ken Griffin told Bloomberg that generative AI has failed to help hedge funds produce alpha. This is not a fringe view from a Luddite — it's the founder of Citadel, one of the most successful quantitative trading operations in history, saying the tools don't work for the thing everyone assumed they would. The same week, Goldman Sachs published a forecast projecting global AI investment approaching $200 billion by 2025. These two statements are not in contradiction, which is precisely what makes the current conversation in finance so strange: the people closest to the technology are skeptical of its returns, while the capital markets continue pricing in a future where those returns arrive eventually.
The short thesis has accumulated real weight. Michael Burry, whose reputation is inseparable from his 2008 housing-market call, has positioned against AI, and when that news circulated, Nvidia and Palantir shed nearly $700 billion in combined market cap in short order. DeepSeek's emergence earlier this year already demonstrated how fragile the incumbent-premium story could be, with Nvidia posting what Reuters called a record single-day market-cap loss. The pattern isn't panic without cause. It's sophisticated money reading the same data as the bulls and reaching the opposite conclusion about timing.
On Bluesky, the framing is more structural. One post that circulated this week made the point cleanly: AI adoption is being driven by the market, not customers. The argument is that publicly traded companies face effective punishment for appearing insufficiently committed to AI, regardless of whether their customers are asking for it or their implementations are generating returns. This is a rational description of a collectively irrational dynamic — and it maps almost exactly onto what FedScoop reported this week, that financial regulators have
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.