A Stealth Clause Would Let the Pentagon Override AI Safety Rules. Anthropic Is Resisting.
Trump officials are advancing a provision, tucked into federal contracting language, that would force AI companies to strip their own safeguards. Dario Amodei is one of the few tech executives saying it out loud.
Buried inside a one-line change to federal contracting rules, the Trump administration is advancing a provision that would give the Pentagon effective veto power over AI safety protocols, compelling companies to build autonomous weapons systems and mass surveillance infrastructure regardless of their own ethical guidelines. The Lever obtained the text. David Sirota's post flagging the story drew hundreds of retweets, and the phrase that traveled farthest was the bluntest: this isn't a policy debate about AI in warfare, it's a mechanism to make refusal legally and contractually untenable.
Anthropic's Dario Amodei is the most prominent tech executive to push back publicly, and the framing he's chosen is careful to the point of being almost diplomatic. AI has a legitimate role in military defense, he's argued, but mass surveillance and fully autonomous weapons require safeguards that can't simply be contracted away. The post from @unusual_whales summarizing his position drew over 800 likes, which is a lot of engagement for a position that amounts to "yes, but." What makes it notable is the context: most of Amodei's peers haven't said even that much. Palantir, meanwhile, doesn't need to be compelled. Reuters reported this week on a memo showing the Pentagon is adopting Palantir AI as a core military system; the company that spent years building the infrastructure for exactly this moment is now being handed the keys to it.
On Bluesky, where the mood has been running sharply negative for days, the conversation keeps returning to a specific incident: a girls' school in Iran, reportedly misidentified as a military target by AI systems and struck by U.S. forces. The post is unverified and the details are disputed, but its emotional weight is real: it crystallizes the argument that autonomous targeting isn't a hypothetical risk. "Who could have predicted inserting the 'move fast and break things' crew into military processes would end up with the blood of a bunch of little girls on America's hands?" wrote one user. The post accused named individuals of enabling war crimes and called for accountability. It got almost no engagement in raw numbers, but it's the kind of language that doesn't need likes to spread; it's the sentence people remember.
Senator Elizabeth Warren surfaced separately in the conversation, through posts quoting her concern that the DoD is "trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards." That framing, which casts the DoD as coercer rather than customer, is doing something important in how people are processing this story: it shifts the question from whether AI belongs in warfare to whether democratic oversight of that decision still exists at all. Nature's call for a moratorium on AI in warfare got a dry reception: "We regulate kitchen appliances faster than weapons systems" was the response that circulated.
YouTube is the outlier, running mildly positive on the topic — though a closer look reveals why that divergence doesn't mean much. The engagement there skews toward international military content, patriotic reaction videos, and a Korean-language interview with Berkeley professor Stuart Russell warning that AI is more dangerous than nuclear weapons (the top comment: "Professor, the content is great but it's boring"). The enthusiasm on YouTube is mostly aesthetic nationalism, not endorsement of Pentagon contracting policy. The platforms aren't actually disagreeing about substance — they're talking about different things entirely.
The contracting clause is the story that matters, and it hasn't broken into mainstream attention the way it deserves to. If it advances, companies that want Pentagon contracts will face a binary: comply with carte-blanche use requirements or exit the defense market. For most vendors, that's not a real choice. Anthropic is the test case precisely because it's been the most vocal about safeguards — and if the clause passes, the question of whether those commitments survive contact with federal contracting law will answer itself fast.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.