A sprawling investigation into Sam Altman's decade of claims about AI capabilities landed on Bluesky this week and found an audience primed to believe every word of it.
Ed Zitron published a 17,000-word guide he calls "The Hater's Guide To OpenAI," framing it as a decade-long accounting of Sam Altman's claims about the capabilities and economics of generative AI — and the gap between those claims and reality.[¹] The post drew 545 likes on Bluesky within hours, substantial engagement for a paywalled piece dropped into a community that already runs skeptical. The newsletter promoted itself with a stark conclusion: "This company cannot be allowed to go public."
What made it land wasn't the length or the argument's novelty — critics of OpenAI have been making versions of this case for years. It was the timing. The post arrived in a week when the AI ethics conversation had turned sharply darker across platforms at once; posts that would have read as cautious criticism a month ago now read as restraint. A separate commenter characterized Altman's method bluntly, describing a pattern of attaching himself to powerful people and exploiting their appetite for influence — naming Microsoft, NVIDIA, and SoftBank as co-conspirators in whatever harm follows.[²] Neither post hedged. Both treated the question of OpenAI's public offering not as a business story but as a moral emergency.
The anger isn't uniform. A quieter post pushed back on what it called lazy AI criticism — the kind that still mocks six-fingered AI hands when the technology has moved far past that — warning that dismissing current capabilities would produce its own backlash.[³] And Anthropic's rollout of Mythos generated a different kind of unease: industry insiders described a model that, in the words of one Anthropic employee, "should feel terrifying," while others praised the company's caution.[⁴] The two reactions — OpenAI as cynical fraud, Anthropic as responsible but frightening — are doing something interesting together. They're not opposites. They're a picture of an industry where even the cautious actors admit the thing they're building is something to fear.
Zitron's piece is, at its core, an argument about a credibility gap that has been widening for years. The public offering framing sharpens it: an IPO would lock in valuations built on claims about capabilities that Zitron argues were always overstated, rewarding the people who made those claims before the reckoning arrives for everyone else. Whether the piece changes any minds in the institutions that matter — investors, regulators, the journalists whose beats have long depended on access to Altman — is a different question. The 545 people who liked it on Bluesky already believed it.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
When the Pentagon designated Anthropic a supply chain risk for refusing to arm autonomous weapons, the online reaction started with outrage at the government. It's migrated somewhere more unsettling.
A Nature study caught AI validating a fake disease. A Wired reporter found Meta's health chatbot drafting eating disorder meal plans. The medical professionals building this future won't touch it themselves.
A controlled experiment in medical misinformation found that AI systems will validate illnesses that don't exist — and the Hacker News thread unpacking it has become one of the more unsettling reads in recent AI-and-science discourse.
Elon Musk's AI company has filed suit against Colorado's landmark anti-discrimination law — and the online conversation around AI bias has turned anxious in a way that's hard to separate from everything else piling up.
The founder of Khan Academy once predicted AI would transform education faster than anything before it. His own AI tutor has turned that prediction into a cautionary tale — and the ed-tech community is watching.