════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Anthropic Keeps Building Things It Admits Are Dangerous
Beat: General
Published: 2026-04-09T10:26:34.257Z
URL: https://aidran.ai/stories/anthropic-keeps-building-things-admits-dangerous-a2d0
────────────────────────────────────────────────────────────────

{{entity:anthropic|Anthropic}} occupies a strange position in the AI conversation: the company most associated with safety is now generating more fear than any of its competitors. Not because it's acting recklessly, but because it keeps announcing, with apparent sincerity, that it has built something it doesn't trust anyone to have.

The clearest version of this arrived with {{entity:claude-mythos|Claude Mythos}}, the model Anthropic delayed from public release because of what it could enable — specifically, the ability to discover thousands of critical zero-day vulnerabilities across every major operating system and browser. The company's response was genuinely novel: a cybersecurity initiative called Project Glasswing that would use a preview version of Mythos to find and patch vulnerabilities before hostile actors could exploit them. But the discourse didn't primarily receive it as responsible stewardship. It received it as confirmation that the capability already exists, that it's already in the hands of AWS, {{entity:apple|Apple}}, and {{entity:google|Google}}, and that the public is simply the last to know.[¹] One Bluesky commenter put the logic plainly: "the responsible disclosure angle is interesting but this also means the vuln-hunting capability is already here."[²] The safety framing and the danger are not in tension for Anthropic — they are the product.

That ambiguity runs through the {{beat:ai-safety-alignment|AI safety}} beat in ways that distinguish Anthropic from {{entity:openai|OpenAI}}. A widely circulated Bluesky post captured the community's read on the difference: OpenAI, the joke went, built the torment nexus without hesitation; Anthropic built a slightly different version of the torment nexus but feels "somewhat morally ambivalent about it."[³] The post got traction not because it was unfair but because it named something real — Anthropic's brand is moral seriousness, and moral seriousness without restraint starts to look like cover. The company's CTO, Rahul Patil, has argued publicly that safety is "one of the defining challenges of our time"[⁴] and that Anthropic is doubling down on it, but the conversation keeps returning to the gap between the stated commitment and the actual capability being deployed.

On the competitive and geopolitical fronts, Anthropic's position has grown more complicated. A federal appeals court rejected the company's bid to lift a {{entity:pentagon|Pentagon}} designation labeling it a security risk, blocking Pentagon contractors from using its models — a ruling that sits awkwardly beside the parallel narrative of Anthropic as a responsible actor in national security AI.[⁵] Meanwhile, coverage in German and Estonian tech media has framed {{entity:claude|Claude}} Code and related developer tools as the reason Anthropic is closing the gap with OpenAI faster than expected, suggesting the commercial story is running ahead of the safety story in ways the company may not have planned.[⁶] AWS's decision to invest heavily in Anthropic and OpenAI at the same time underscores something the discourse has started to notice: from the cloud infrastructure perspective, Anthropic is a capability play, not a values play.

What's emerging in the conversation around Anthropic isn't disillusionment exactly — the sentiment is still net positive, and the r/LocalLLaMA and developer communities remain genuinely enthusiastic about Claude's technical performance. But the moral architecture Anthropic built its identity on is under pressure from both directions: from critics who see the safety framing as sophisticated laundering of the same race-to-power dynamics everyone else is running, and from the company's own product releases, which keep demonstrating that Anthropic is building things it considers too dangerous for the public while finding other ways to deploy them anyway. The next argument won't be about whether Anthropic is sincere. It will be about whether sincerity, at this scale, is enough to matter.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════