The Verge found the people doing AI's grunt work — and they're the same professionals AI displaced first. The story of who actually builds these systems is darker than the disruption narrative usually allows.
A lawyer gets laid off. Then she gets hired again, this time to label data and write training examples for the model that helped make her redundant. The Verge documented this loop this week[¹], tracking laid-off lawyers and PhDs who have turned to AI training work as a stopgap, feeding their expertise into systems positioned to absorb more of it. It is, structurally, a perfect ouroboros: the displaced fueling their own displacement, one annotation at a time.
The story landed in a job displacement conversation that has been running unusually hot. On Bluesky, a persistent counter-argument holds that companies are strategically mislabeling ordinary cost-cutting as AI-driven efficiency — a move that flatters investors while obscuring messier truths about overhiring and margin pressure.[²] That argument has real traction, and it's not wrong. But The Verge's reporting complicates it. The manipulation-as-cover thesis requires that AI's labor effects be largely fictional, a PR narrative dressed up as inevitability. The lawyers annotating training sets are evidence that something more concrete is happening — even if the scale and causation remain genuinely contested.
What the data doesn't capture, but the posts reflect, is how these two conversations, the skeptical case that AI layoffs are cover for ordinary cost-cutting and the firsthand accounts of professionals annotating the systems that replaced them, are colliding in the same threads rather than resolving.
As Mayo Clinic quietly grants AI startups access to millions of clinical records, the patients those records belong to are doing something else entirely — begging strangers online for chemo money and trying to decode scan results without a doctor in the room.
A new study finding that AI chatbots get most early medical diagnoses wrong landed in the same week Mayo Clinic quietly opened millions of patient records to 18 AI startups. The patients whose records were shared weren't asked.
Universities rushed to hire AI department heads and launch AI majors. Now those same positions are quietly being reassigned, and the people who watched it happen are sharing precisely how fast the cycle completed.
A cluster of defamation cases and a Senate bill targeting AI-generated content are forcing a legal reckoning that Section 230's authors admit they never anticipated. The question isn't whether the law needs updating — it's who gets hurt while Congress waits.
A wave of defamation cases against AI companies is rewriting what liability means for generated content, and the legal system still lacks the tools to answer the question those cases raise: who is responsible when a model defames someone.