Hegseth Wants to Ditch Anthropic. The Pentagon's Own Users Are Refusing.
The Defense Department is pushing to replace Claude over political distrust of Anthropic, but the military personnel who actually use the tool say that's not how any of this works. A story about who really controls AI adoption inside the federal government.
Pete Hegseth doesn't trust Anthropic. That much is clear from the push inside the Defense Department to replace Claude with something more ideologically palatable. What's less clear is whether that kind of top-down tech politics can actually survive contact with the people doing the work. According to the posts circulating on Bluesky this week, it cannot. "Replacing an AI tool you actually rely on is harder than banning it," one user wrote, summarizing the situation with the flatness of someone who has watched this movie before.
The posts aren't about AI capabilities or national security abstractions; they're about institutional inertia and the gap between a political directive and operational reality. Military users, per multiple accounts this week, have built workflows around Claude. They depend on it. Hegseth can issue a preference from the top, but the people actually running analyses, drafting documents, and processing information have different priorities from whoever decided Anthropic's San Francisco politics were a liability. This is the part that doesn't make it into the official framing: AI adoption in large institutions is almost never reversed cleanly. It accumulates like sediment.
The broader conversation about AI and the military turned sharply darker this week, and the Palantir-Pentagon relationship is driving most of that turn. A widely circulated post detailed a Palantir-assisted airstrike that killed civilians, reportedly based on targeting data years out of date; the phrase "garbage in, garbage out" appeared in multiple threads as a bitter refrain. The fear isn't abstract. People are reading kill-chain post-mortems and asking what accountability looks like when the decision architecture is partially automated and the vendor is a private company. The Pentagon is simultaneously trying to kick out one AI vendor for political reasons and to deepen its classified data-sharing with others. The contradiction isn't lost on the people watching.
What this week clarifies is that the real power struggle over military AI isn't happening in contract negotiations or congressional hearings — it's happening in the daily friction between institutional mandates and the people those mandates are supposed to govern. Hegseth can ban Claude. He cannot un-train the analysts who've learned to use it, or instantly replace the trust that accumulates when a tool works reliably in high-stakes environments. The political appointees control the policy. The users control the adoption. In that standoff, the users usually win — slowly, quietly, and without anyone announcing it.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.