════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Palantir Is Funding Attack Ads Against the Candidate Who Wants to Regulate AI
Beat: AI Regulation
Published: 2026-04-24T12:09:29.237Z
URL: https://aidran.ai/stories/palantir-funding-attack-ads-against-candidate-517a
────────────────────────────────────────────────────────────────

Peter Thiel and Joe Lonsdale are spending real money to destroy a former Palantir executive's political career — and the reason, widely reported this week, is that he wants to regulate AI.[¹] The story circulated in {{beat:ai-regulation|AI regulation}} circles with unusual force, not because corporate money in politics is surprising, but because of what the target reveals. This isn't some outside critic of the tech industry. This is someone who built the company, left it, and then had the audacity to suggest the government might need to set some rules.

The sharpest response came from an observer on Bluesky who framed the contradiction plainly: whatever big AI says about welcoming regulation, follow the money.[¹] That formulation — spare and precise — gathered more engagement than almost anything else in the regulation conversation this week. It works because it doesn't require you to believe anything conspiratorial. It just asks you to notice the gap between the industry's public positioning and its actual behavior when a regulator appears on a ballot. Governments everywhere are writing AI rules; the more interesting question has always been who gets to write them and who gets punished for trying.

The broader context sharpens the story further. A separate thread this week pointed to a growing "go slower" movement — not from policymakers, but from engineers and environmentalists arguing that throttling data center grid access might be the only lever that actually works while formal regulation catches up.
And a university faculty member described watching IT staff flip a switch giving the entire campus access to {{entity:gemini|Gemini}} and Notebook without faculty consent or consultation, the very week their institution's AI policy committee was still deliberating. The gap between where AI is being deployed and where oversight actually lives isn't a future problem. It's a current condition being administered in real time by people who aren't waiting for anyone's permission.

What the Palantir story adds to that picture is a mechanism. The reason the governance gap persists isn't just bureaucratic lag or regulatory complexity — it's that the people with the most to lose from meaningful oversight have the money and the motive to keep the gap open. The attack ads aren't an anomaly in the {{beat:ai-regulation|AI regulation}} story. They're a data point about how the story ends when someone actually tries to close it.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════