The company that positioned itself as the responsible alternative to OpenAI is now racing it to a $30 billion run rate and a mega-IPO. The discourse is starting to notice the contradiction.
Anthropic has always asked to be judged differently from its competitors — not just as an AI company but as a safety-first institution that happened to build products. For most of its existence, the discourse obliged. The company was founded by former OpenAI researchers troubled by the pace of capability development, and that origin story did enormous work: it let Anthropic occupy a rare position where building frontier AI models was itself framed as the responsible thing to do. That framing is under increasing pressure this spring, and the pressure is coming from Anthropic's own numbers.
The company's annual recurring revenue trajectory has become the dominant fact people reach for when discussing it: $100 million at the start of 2024, roughly $1 billion a year later, then $9 billion by the end of 2025, and now past $30 billion, after a deal with <entity:broadcom>Broadcom</entity> and a massive TPU supply pact with <entity:google>Google</entity> locked in 3.5 gigawatts of compute through 2027. On r/investing, the ARR arc was posted with a single instruction: "Invest accordingly." There was no mention of constitutional AI or the model spec. The safety company is now, straightforwardly, a growth story. And on Bluesky, where the skeptics tend to congregate, someone watching the <entity:openai>OpenAI</entity>-Anthropic rivalry put it plainly: this might simply be "a war of attrition on capital," with both companies running the same playbook regardless of how differently they describe it.
What makes Anthropic's position genuinely strange is how it keeps generating philosophical material even as it races toward an IPO. The revelation that <entity:claude>Claude</entity> contains 171 internal emotion vectors that shape its responses, functional states that by the company's own account influence behavior without constituting sentience, landed in the <beat:ai-consciousness>AI consciousness</beat> conversation as something between a disclosure and an admission. Critics who had argued that Anthropic was anthropomorphizing its models for marketing purposes found the 171-vector figure oddly validating. Advocates for careful AI development found it reassuring that the company was mapping these states rather than ignoring them. Both readings are available from the same data point, which is either a sign of sophisticated communication or a sign that the company has learned to emit statements satisfying multiple audiences simultaneously. The <beat:ai-safety-alignment>safety and alignment</beat> community, which once treated Anthropic as something close to a home institution, is increasingly unsure which reading to apply.
The company's relationship with <entity:claude-code>Claude Code</entity> — and the accidental leak of 500,000 lines of its source code, including an unreleased background memory daemon called Kairos — illustrated something else: that the most revealing information about where Anthropic is heading often arrives unintentionally. Developers on Bluesky greeted the leak as "a gift," arguing that the unreleased features showed more about the company's actual direction than any press release. That reaction points to a trust dynamic that Anthropic has cultivated carefully: a developer community that feels it understands the company's real intentions better than the public statements capture. Whether that trust survives contact with a post-IPO Anthropic, subject to quarterly earnings pressure and institutional shareholder expectations, is the question the discourse hasn't fully confronted yet.
The geopolitical frame is also tightening around Anthropic in ways that complicate the ethics positioning. The company is reportedly cooperating with OpenAI and Google against adversarial distillation attacks from Chinese AI firms, a framing that situates Anthropic inside a national security logic rather than outside the competitive fray. In the UK, the Green Party singled out Anthropic by name when demanding "AI sovereignty," calling the government's relationship with the company a "dangerous dalliance" with corporate interests dressed up as public benefit. The <beat:ai-geopolitics>geopolitics</beat> beat keeps pulling Anthropic into stories where the safety brand offers no cover, where what matters is not whether the model is aligned but whose interests the company ultimately serves. At a $30 billion run rate and closing fast on an IPO, the answer to that question is becoming harder to finesse.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
A developer's simple request on Hacker News, asking what people were building that had nothing to do with AI, turned into an accidental census of how thoroughly agents have colonized developer identity: the thread became a confession booth for everyone who had already surrendered to the hype.
A payment from Nvidia to CoreWeave for unused AI infrastructure has cut through the usual hardware hype, because the math doesn't add up: people are asking whether the AI compute boom is real demand or an elaborate circular subsidy, and why nobody in the press is saying so. The think tank story that broke last week is now getting a second look for exactly the same reason.
When ProPublica management rolled out an AI policy without bargaining with its union, workers filed an unfair labor practice charge with the NLRB — a move that turns an abstract governance debate into a concrete test of who controls AI in the workplace.