A Goldman Sachs report quietly confirmed what workers have been saying for months: industries with high AI exposure are shedding jobs faster than less-exposed ones. The conversation has stopped debating whether it's happening and started arguing about who's next.
A Goldman Sachs research note circulating on Bluesky this week didn't so much generate headlines as confirm a suspicion. The finding was straightforward: since the launch of ChatGPT, industries and occupations with higher AI substitution scores have experienced larger average declines in employment.[¹] The post got limited engagement, but that's almost beside the point. It arrived in a conversation that had already moved past asking whether AI displaces workers and on to asking which workers are next, and whether anyone in power is being honest about the sequence.
The sharpest early signal isn't layoffs; it's the shrinking door for people who haven't started yet. Among workers aged 22 to 25, entry into the occupations most exposed to AI has fallen roughly 14 percent since 2022, according to reporting circulating on Bluesky.[²] In tech specifically, job openings for new graduates have already been halved. These aren't people being displaced from jobs they held; they're people being quietly locked out of jobs they expected to exist. The damage is invisible in unemployment statistics because nobody counts the opportunities that never materialized. An ex-Google executive quoted in Fortune this week put it plainly: CEOs are too busy celebrating their efficiency gains to notice they're hollowing out the talent pipeline that produced them.
Against this backdrop, the institutional messaging reads as almost surreal. Nvidia's CEO says AI will make everyone busier. Goldman Sachs's own CEO says he's excited about job functions changing. The World Economic Forum insists AI creates more jobs than it destroys. These statements aren't wrong so much as they're operating in a different time zone from the people hearing them, one where transition costs are smoothed out over decades and nobody has to explain to a 23-year-old why their entry-level position was automated before they graduated. On Bluesky, a craftsperson put the gap more precisely: real practitioners aren't scared that AI will match their quality, but they know the people hiring them increasingly don't care whether it does.[³]
A thread about the gaming industry captures the dynamic with uncomfortable clarity. When a game publisher announced layoffs this week, a Bluesky post with 88 likes asked the people celebrating the news to pause and check whether what was being cut was the generative LLM work or the actually useful AI that game studios have spent decades building: pathfinding, physics, procedural generation.[⁴] It's a meaningful distinction that the discourse almost entirely ignores. The conversation about AI job displacement tends to collapse all automation into one category, when in practice some cuts are speculative bets on unproven tools and others eliminate work that genuinely required skill. Workers are losing both kinds of jobs, but for different reasons, and the policy responses those reasons demand have little in common.
The harder question surfacing in recent posts is structural. One Bluesky thread argued that AI is about to supercharge a dynamic already pushing the working class into tighter corners: more unemployment, more loan defaults, a labor market that creates returns at the top and precarity everywhere else, with UBI as the only visible escape valve.[⁵] That framing might be alarmist, but it's gaining traction precisely because the alternative framings (retraining programs, job-creation studies, government workforce initiatives) keep arriving without specifics. Canada, for instance, announced billions in AI job-creation investment. The workers watching the layoffs aren't asking for announcements. They're asking for an honest account of who absorbs the transition costs, and so far no institution with actual authority has offered one.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
A post on Bluesky questioning whether public block lists function as engagement hacks — not safety tools — cuts to something the AI bias conversation keeps circling without landing: the infrastructure of moderation encodes the same exclusions it claims to prevent.
A Bluesky post about Esquire replacing a real interview subject with an AI simulacrum went quietly viral, and it crystallized something the usual job-displacement arguments haven't managed to capture.
A musician discovered an AI company had scraped her YouTube catalog, copied her music, and then used copyright law as a weapon against her. The Bluesky post describing it became the most-liked post in the AI creative-industries conversation this week, and it's not hard to see why.
A wave of preregistered research is confirming what people already feared: the standard defenses against AI disinformation — content labels, warnings, media literacy — don't actually protect anyone. The community reacting to this finding is not panicking. It's grimly unsurprised.
A Hacker News post flagging OpenAI's undisclosed role in a child safety initiative surfaced just as the broader safety conversation turned sharply negative — revealing how much trust the AI industry has already spent.