════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Lawyers and PhDs Are Training the Models That Replaced Them
Beat: AI Job Displacement
Published: 2026-04-14T06:24:22.109Z
URL: https://aidran.ai/stories/lawyers-phds-training-models-replaced-them-b33f
────────────────────────────────────────────────────────────────

A lawyer gets laid off. Then she gets hired again — this time to label data and write training examples for the model that helped make her redundant. The Verge documented this loop this week[¹], tracking laid-off lawyers and PhDs who have turned to AI training work as a stopgap, feeding their expertise into systems positioned to absorb more of it. It is, structurally, a perfect ouroboros: the displaced funding their own displacement, one annotation at a time.

The story landed in a {{beat:ai-job-displacement|job displacement}} conversation that has been running unusually hot. On Bluesky, a persistent counter-argument holds that companies are strategically mislabeling ordinary cost-cutting as AI-driven efficiency — a move that flatters investors while obscuring messier truths about overhiring and margin pressure.[²]

That argument has real traction, and it's not wrong. But The Verge's reporting complicates it. The manipulation-as-cover thesis requires that AI's labor effects be largely fictional, a PR narrative dressed up as inevitability. The lawyers annotating training sets are evidence that something more concrete is happening — even if the scale and causation remain genuinely contested.

What the data doesn't capture, but the posts reflect, is how these two conversations — the skeptical

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════