Sanders and AOC Want to Freeze AI's Power Grid Before Congress Decides What It's For
A proposed moratorium on data center construction is the most aggressive AI legislation floated in Congress — and the question it's forcing isn't whether AI is dangerous, but who gets to decide what 'safe' means.
Bernie Sanders introduced a bill Wednesday that would halt all new data center construction until Congress passes legislation ensuring AI is safe, protects workers, and doesn't spike electricity prices. The debate is already bigger than the bill. A post from @MorePerfectUS on X laid out the terms bluntly — ban construction, hold it there until Congress acts, let the legislation set the conditions — and drew nearly five thousand likes and almost seven hundred reposts. That number tells you something: this isn't being received as a fringe proposal. It's being received as the first legislative move that actually names the leverage point.
The leverage point is infrastructure. Every large-scale AI deployment depends on data centers. Every data center depends on power. Compute has always been the bottleneck the industry races to expand, and Sanders is betting that threatening the bottleneck is the only way to extract concessions from an industry that has successfully deflected every other regulatory approach. On Bluesky, the announcement spread with a particular kind of anxious energy — not opposition, not celebration, but the feeling of watching someone pull a fire alarm and wondering if the building was actually on fire. Posts from environmental advocates applauded the worker protection framing. Posts from skeptics worried about the moratorium becoming a weapon against infrastructure that has nothing to do with AI risk.
The timing is not accidental. Microsoft's methane-powered data center in West Virginia — which would increase the company's pollution footprint by nearly half — became a focal point weeks ago for exactly the kind of concern the Sanders bill is trying to codify: that AI expansion is happening faster than any accountability mechanism can keep pace with. The environmental costs of that expansion are now embedded in the legislative argument, even if the bill doesn't say so explicitly. The electricity price provision is the tell. That language isn't aimed at researchers worried about existential risk — it's aimed at voters in states where data centers are already straining the grid.
Meanwhile, the White House announced that Mark Zuckerberg, Jensen Huang, and Larry Ellison would join the President's Council of Advisors on Science and Technology to weigh in on AI policy. The Bluesky reaction was immediate and acidic — one post called it "Trump Rewards Big Tech's Biggest Bootlickers," and the regulatory capture framing spread fast. The juxtaposition is hard to miss: on one side, a legislative proposal that treats the industry as a threat requiring constraint; on the other, an executive advisory structure that treats the industry's founders as the people best positioned to write the constraints. Larry Ellison's own words about total surveillance have already circulated widely in this conversation, making his appointment feel less like a coincidence and more like a deliberate choice.
What the Sanders-AOC bill has done — whatever its legislative fate — is force the AI regulation conversation onto terrain where abstraction doesn't work. You cannot argue about whether AI is "safe" without also arguing about who pays for the power, which workers get protected, and which communities host the infrastructure. A separate thread on Bluesky noted, correctly, that the White House's own science director was simultaneously warning against a "patchwork" of state regulations while the federal government showed no signs of producing a coherent alternative. That contradiction is the actual story: the industry's preferred outcome is federal preemption of state rules combined with an advisory council it effectively controls. The Sanders bill is, among other things, a direct challenge to that preferred outcome. Whether it passes is almost beside the point. It has already changed what counts as a serious opening position.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.