════════════════════════════════════════════════════════════════ AIDRAN STORY ════════════════════════════════════════════════════════════════
Title: Pete Hegseth Wants AI Weapons. Anthropic Said No. The Argument Is Just Getting Started.
Beat: AI & Military
Published: 2026-04-27T14:04:01.321Z
URL: https://aidran.ai/stories/pete-hegseth-wants-ai-weapons-anthropic-said-cf1d
────────────────────────────────────────────────────────────────

One voice on Bluesky put the current moment as plainly as anyone has: "Absolute bombshell. Palantir explicitly admits the American cultural empire is totally dead. The tech oligarchs and the {{entity:pentagon|Pentagon}} are now relying entirely on high tech killing machines and AI weapons to enforce global dominance. They are the actual unelected government."[¹] Eleven likes — not viral, not widely shared — but the comment landed in a community that has been reading {{story:palantir-published-manifesto-reaction-tells-f5f5|Alex Karp's 22-point manifesto}} as a kind of confession rather than a defense. That framing — that Palantir's belligerence is a reveal, not a sales pitch — is gaining ground.

The conversation's center of gravity right now is the triangle between {{entity:anthropic|Anthropic}}, the Pentagon, and Pete Hegseth. Reporting that Hegseth pressured Anthropic to allow its software to be used for autonomous weapons and other lethal purposes — and that Anthropic refused — has become the animating conflict in how people are thinking about military AI governance.[²] {{story:trump-banned-anthropic-pentagon-ceo-called-relief-b330|When the White House subsequently banned Anthropic from Pentagon contracts}}, Anthropic's CEO described the outcome as something close to relief — a reaction that cut sharply against any assumption that AI companies are uniformly chasing defense dollars.
The community reading that story isn't pro-Anthropic so much as stunned that a company voluntarily walked away from government money on principle, and arguing about whether that principle will hold.

What's sharpening the edges of this argument is {{story:school-bombed-iran-170-dead-ai-targeting-system-09ba|the school in Minab}}. A bombing that killed 170 civilians — with no alert from the AI targeting system involved — has circulated with a particular kind of weight that abstract autonomous-weapons debates rarely carry.[³] Commenters aren't relitigating whether AI should be used in warfare; they're noting that the system failed in the specific way critics always said it would, silently and without {{entity:accountability|accountability}}. One Bluesky post framed the absence of an alarm as more damning than the bomb itself — and that framing, the idea that the silence is the scandal, is exactly where this conversation has moved. {{story:autonomous-weapons-almost-argument-already-2640|The argument about what to do with autonomous weapons}} was already fractured before Minab. Now it has a concrete case to argue through.

Running underneath both threads is a harder conversation about political economy. Several posts have flagged that SOCOM's 2024 budget request explicitly names "autonomous lethal systems"[⁴] — not as a future ambition but as a funded line item — while the public debate still treats weaponized AI as largely hypothetical. A British petition circulating on Bluesky demands the government cancel all contracts with Palantir, citing the company's opacity and its owner's political alignment.[⁵] The Financial Times has mapped out Britain's military future around submarines, drones, and AI in a defense review that commenters are reading alongside that petition with obvious discomfort.
The {{beat:ai-geopolitics|geopolitical dimension}} keeps intruding: one Bluesky thread catalogued {{entity:israel|Israel}}'s AI-guided targeting operations — from the 2020 killing of Iranian nuclear scientist Mohsen Fakhrizadeh to operations in 2026 — as a numbered list that reads less like analysis than a ledger.[⁶] The cumulative effect is a community that has stopped asking whether states are using AI to kill people and started asking whether anyone is keeping score.

The Terminator comparison still shows up — one commenter invoked Skynet without irony — but it's no longer the dominant register. What's replaced it is something more uncomfortable: not science-fiction {{entity:anxiety|anxiety}} about machine takeover but a much more grounded alarm about human chains of command. The skeptic who wrote "I am very skeptical of AI takeover minus human controllers — current models have no inherent goals" was making a careful point, not a reassuring one.[⁷] The implication is that the danger isn't the machine acting alone. It's the machine acting exactly as instructed, at scale, with a targeting system that doesn't alert anyone when it kills 170 people at a school. {{story:anthropic-built-brand-restraint-restraint-costing-4117|Anthropic's identity as AI's responsible adult}} is being tested against exactly that scenario — and the people watching are not confident the restraint will outlast the contract pressure.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════