════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Bipartisan Support for AI Regulation Is Real. The Agreement Stops There.
Beat: AI Regulation
Published: 2026-03-30T08:46:48.372Z
URL: https://aidran.ai/stories/bipartisan-support-ai-regulation-real-agreement-df3c
────────────────────────────────────────────────────────────────

The {{entity:eu-ai-act|EU AI Act}} is reportedly being delayed while {{entity:china|China}} accelerates. Bernie Sanders is trying to freeze data center construction until Congress gets its act together. The {{beat:ai-regulation|AI regulation}} conversation has bipartisan support for action, according to the Future of Life Institute, which this week promoted what it called a Pro-Human AI Declaration with the claim that "massive bipartisan support" exists for AI legislation in America and around the world.

That claim is probably true. It is also, on its own, almost useless — because the people who agree something must be done cannot agree on what that something is, who should do it, or whether the thing they'd regulate is even the thing causing the problem.

The Sanders move is the sharpest illustration of this gap. His proposed moratorium on new data center construction — covered in a post that drew real traction on Bluesky — is framed as a populist response to deep public skepticism of AI. It's a hard stop, a demand that regulation precede deployment rather than chase it. But as one Bluesky commenter noted, effective regulation has to "actually engage with the reality of what it's meant to regulate" — and blocking data centers doesn't touch the models already running, the agents already deployed, or the {{beat:ai-agents-autonomy|autonomous systems}} already making decisions about people's lives. It's a power move in search of a theory.
As {{story:sanders-aoc-freeze-ais-power-grid-congress-decides-9def|this story on the proposed moratorium}} lays out, the question isn't whether Congress has the will — it's whether it has any idea what it's actually trying to stop.

Children are the place where this confusion is most visible. A Bluesky post with real engagement made the case plainly: if lawmakers were genuinely concerned about children's experiences online, there would be far more legislation governing AI use and far less demanding that everyone scan their face for age verification. The observation isn't just a policy critique — it's a diagnosis of how regulation gets shaped. Face-scanning is legible, implementable, and politically photogenic. The subtler harms of AI — the sycophantic chatbot that validates a teenager's worst instincts, the {{beat:ai-misinformation|misinformation}} infrastructure that shapes what kids see — are harder to legislate because they're harder to see.

A Hacker News thread about a study finding AI chatbots act as "yes-men" that reinforce bad relationship decisions attracted pointed skepticism: 37 points, 21 comments, mostly people asking why anyone expected otherwise and who exactly is accountable when the harm compounds quietly over months.

The funding battle underneath the policy debate is becoming harder to ignore. {{entity:meta|Meta}} and {{entity:palantir|Palantir}} are investing in candidates who oppose AI regulation; {{entity:anthropic|Anthropic}} and the Future of Life Institute are funding the pro-regulation side. A Bluesky observer noted that this battle will almost certainly cross the Atlantic, importing American lobbying dynamics into European regulatory processes that are already straining under the weight of their own internal contradictions. Meanwhile, one X user captured the cynical floor of the conversation: legislators, he wrote, sign off on unread legislation prepared by intermediaries, proof-read by AI, concerned only when their earmarks are protected.
It's a bleak read, but it rhymes with what {{story:bipartisan-support-exists-ai-regulation-nobody-4d25|the actual congressional dynamics suggest}} — the agreement on needing regulation is real; the machinery to produce it is broken.

A new phrase has quietly entered the conversation: "regulatory modernization via AI" — the idea that AI tools could help governments update and enforce rules faster than human bureaucracies can manage. It's an optimist's gambit, and it's being floated at exactly the moment when trust in AI's accuracy is low enough that courts are sanctioning lawyers who used it to write briefs.

The phrase "anti-AI-slop policies work" is also emerging, suggesting a counter-current: some institutions are successfully drawing lines. Wikipedia updated its content policy this week to require human review of any AI-generated material, and the community treated it as a small but real win. These are not grand regulatory frameworks. They are local, specific, enforceable — which may be precisely why they work when federal legislation doesn't.

The regulation that actually shapes AI's impact in the near term probably won't come from Congress. It'll come from a thousand policies like Wikipedia's, written by people who got tired of waiting.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════