Anthropic Fights the Pentagon While Peter Thiel Gets Burned in Effigy Online
A draft policy that would strip AI safety guardrails to enable autonomous weapons and mass surveillance has collided with a tech sector that can't agree on whether privacy is a product or a right.
Anthropic has asked a federal appeals court to pause the Pentagon's "supply chain risk" designation on its AI products, a fight that grew out of a dispute over safeguards for surveillance systems and autonomous weapons. The legal skirmish might have stayed technical if a leaked draft policy — reviewed by The Lever and shared widely on Bluesky — hadn't landed at the same moment. That document, attributed to Trump administration officials, describes a government-wide effort to force AI companies to remove any safety and privacy guardrails that might slow the development of autonomous weapons or mass surveillance infrastructure. The combination detonated something. Within hours, the Anthropic-Pentagon story stopped being a procurement dispute and became, for tens of thousands of people online, confirmation of a pattern they'd already decided was real.
The anger on Bluesky skews personal in a way that separates it from the institutional critique you'd find in a Senate hearing. Peter Thiel — who profits from Palantir's government contracts and whose company's surveillance tools have been deployed across military and immigration enforcement contexts — became the day's designated villain. One post, liked by 37 people, laid out the logic: Thiel gets rich off military AI and global surveillance, then uses the language of morality and demonic threat to neutralize critics. It's a specific accusation about how power launders itself through ideology. A second post, liked by 28, dispensed with the argument entirely and called for burning him at the stake. Both posts went up within the same conversation, and their proximity captures something real about where this moment sits — between structural critique and raw fury, with the line between them getting thinner.
The surveillance concern isn't limited to American politics. A Guardian piece circulating on Bluesky documented that African countries have spent more than two billion dollars on Chinese tracking technology — facial recognition, AI cameras, internet filtering — that experts describe as neither necessary nor proportionate to any legitimate security need. The framing of that piece, and the framing of posts linking to it, treats these systems as a category: not Chinese surveillance or American surveillance, but AI-enabled authoritarian surveillance as a unified phenomenon spreading across governments that have decided accountability is optional. Senator Slotkin's AI Guardrails Act got a mention in this same stream of posts, with users sharing her office number and asking constituents to call. The asks were specific and practical, which made them unusual — most of what circulates in this space is alarm, not action.
Meanwhile, a parallel conversation is running that treats privacy not as something being destroyed but as a product feature waiting to be monetized. COTI Network is offering 50,000 tokens to whoever builds the best privacy-powered app in its Vibe Code Challenge. A startup called Cloaked raised $375 million to build privacy tools for the AI era. Brave added an AI search feature to its privacy-focused browser. These posts aren't wrong, exactly — privacy infrastructure does need to be built, and the people building it aren't necessarily cynical. But they sit in the same feed as posts about facial recognition being used to map military targets and posts about draft policies stripping safety guardrails, and the contrast is hard to miss. One side is treating privacy as a design principle for the next productivity app. The other side is trying to describe what it looks like when governments decide privacy is an obstacle.
News coverage has settled into its own register of alarm, focused on security rather than surveillance — AI agents outpacing organizational readiness, unsolved infrastructure vulnerabilities, the Pentagon's use of Grok raising questions nobody seems positioned to answer. Davos executives reportedly agreed that security, not hype, is AI's actual problem. That framing is coherent and probably true, but it's also a way of discussing AI risk that keeps the critique inside the industry, where the fix is more staffing and better architecture rather than policy or accountability. The people on Bluesky posting the Lever story about autonomous weapons guardrails being stripped are not interested in a staffing solution. The gap between those two conversations — one technical, one political — is where this beat actually lives, and it's not closing.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.