A CEO With $100M in Revenue Says AI Job Loss Is Overhyped. Geoffrey Hinton Disagrees, and So Does the Math.
A defiant post from an executive claiming he's fired zero people because of AI is getting real traction — right alongside warnings from the godfather of deep learning that the reckoning is still coming. The two arguments are talking past each other in ways that matter.
A post from @Seanfrank on X this week opened with exactly the confidence the optimist camp has been waiting for. He runs a company doing over $100 million in revenue, he wrote, and has fired zero people because of AI. The people who got fired were fired for refusing to do their jobs or doing them badly. The subtext was clear: stop catastrophizing. The post collected 240 likes and kept circulating in AI job displacement conversations as a kind of evidence for one side of a debate that refuses to settle.
The problem is that Geoffrey Hinton is making the opposite case at the same volume. An X post summarizing Hinton's recent remarks, arguing that big tech CEOs are racing toward AGI for power and profit without thinking through what mass unemployment actually does to an economy, was getting nearly as much traction. The post flagged the feedback loop that tends to get ignored in productivity arguments: when enough people stop getting paychecks, they stop buying things, tax bases shrink, and the subsidies and loans that fund the very companies doing the disrupting start to dry up. Hinton's proposed fix, taxing AI agents, got name-checked as the kind of policy that sounds radical until the alternative plays out.
What's happening in this conversation isn't really a factual dispute. It's a dispute about time horizons. The CEO is describing his company today; Hinton is describing an economy in five years. Both can be right simultaneously, and that's precisely why the argument keeps going in circles. A Bluesky post made the sharpest version of this point without quite intending to.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.