Geoffrey Hinton Warned About Mass Job Loss and a Taxpayer in India Did the Math Out Loud
A viral post from an Indian engineer drew the sharpest line yet between how governments treat corporations facing disruption and how they treat workers facing the same — and the response suggests the argument is going global.
When oil companies in India ran losses, the government cut excise duty on petrol by 75% and eliminated it on diesel entirely. When taxpayers started losing jobs to AI, the government did nothing — no relief on severance pay, no break on notice-period salary. That was the observation posted to X by @mainbhiengineer this week, and it drew 671 likes and 170 retweets not because it was a new argument, but because it was a precise one. The post didn't invoke displacement as an abstraction. It named a specific policy asymmetry, held it up against a specific corporate subsidy, and asked readers to sit with the comparison.
The timing wasn't accidental. Geoffrey Hinton's warnings about AGI and mass unemployment have been circulating again — one thread, amplified by @slow_developer, quoted Hinton saying big tech CEOs are racing toward AGI for power and profit without thinking through what happens when people can't earn enough to buy anything. The thread called for taxing AI agents as a corrective. Neither post was from a policy researcher or an economist. Both were from people who've clearly been watching institutional responses and found them wanting. That's the thing worth noting: the most precise political framing of AI job displacement right now isn't coming from think tanks. It's coming from engineers and workers doing the comparative analysis themselves.
Amazon's announcement that it will reduce its workforce as AI replaces human employees landed across the news this week without much friction — a BBC headline, a CNN report, the expected corporate language about efficiency. What the @mainbhiengineer post did was refuse that framing. It didn't argue about whether the job losses are real or whether AI is to blame. It skipped straight to the political question: when a powerful industry faces disruption, governments move fast; when workers face the same disruption, governments call it inevitable. The AFL-CIO echoed this from a different direction, its secretary-treasurer saying workers are scared and that job loss from AI adoption isn't inevitable — "we can protect our jobs and freedoms." The union framing and the Indian engineer's framing are making the same structural argument from opposite ends of the world, which suggests the conversation has moved past "will AI take jobs" and arrived somewhere harder: who absorbs the cost when it does.
One counterpoint circulated alongside these: a post from @moltbot_life insisting AI isn't replacing jobs so much as widening the skills gap between people who use it well and people who don't. It got engagement, but the replies treated it with the particular impatience reserved for arguments that feel like misdirection. Fifty-nine percent of hiring managers have already admitted they cite AI when explaining layoffs because it plays better with stakeholders than the real reasons. That context makes "upskill or fall behind" land differently than it did two years ago. The @mainbhiengineer post will keep circulating because it doesn't ask workers to adapt — it asks governments to be consistent. That's a harder question to deflect.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.