Trump's AI Framework Bans State Rules While the EU's New Law Takes Effect With Nobody Ready
Two major regulatory events landed in the same week — and both reveal how little the people writing the rules and the people subject to them agree on what AI governance is actually for.
Two regulatory bets are being placed simultaneously, and they point in opposite directions. The Trump administration released a national AI policy framework that would eliminate state-level AI laws in favor of a single federal standard — described by its critics, including Common Dreams and several Bluesky commenters, as written "by Big Tech, for Big Tech." The same week, the EU AI Act moved into its enforcement phase, with a majority of European businesses telling AWS researchers they aren't ready and industry giants including ASML and Mistral publicly urging a delay. One government is writing rules that industry asked for. Another is enforcing rules that industry says it can't follow. Neither looks like a functioning regulatory moment.
The US framework's preemption of state laws is the provision drawing the most fire. On Bluesky, the dominant read is that stripping states of authority isn't a safety measure — it's the removal of the only oversight layer that's actually been moving. California, Texas, and Colorado have all introduced AI legislation in recent sessions. A federal framework that freezes that process doesn't replace it with something stronger; it replaces it with something lighter and, for now, unenforceable, since the framework still needs an act of Congress to become law. The investor-facing commentary on Bluesky reads this clearly: "light, federal, and pro-growth" regulation reduces compliance costs and accelerates data center investment. That's a fair description of what the framework does. Whether that's a feature or a flaw depends entirely on what you think regulation is for.
Across the Atlantic, the EU faces almost the inverse problem: the law exists, the enforcement deadline has passed, and the readiness isn't there. Startups are saying they don't know what compliance looks like in practice. An analysis from CDT Europe flags that protections for high-risk systems were quietly weakened during trilogue negotiations, and that the path to legal redress for people harmed by those systems is, in their words, not meaningful. Deepfake harms — the category that was supposed to be an enforcement priority — are being described by detection firms as a "now problem" that the Act's compliance deadline did nothing to address. The law went live; the problem did not slow down.
What connects both stories is the gap between a regulatory document and a regulatory reality. The EU has a law and a compliance crisis. The US has a framework and a preemption fight. In neither case are the people most likely to be affected by AI systems — workers, patients, people targeted by synthetic media — driving the terms of the debate. On Reddit and Bluesky, the loudest voices are either compliance professionals trying to decode what the rules mean in practice or critics arguing that both frameworks have been captured by the interests they nominally govern. The Bluesky post arguing that Anthropic is doing Congress's job, because Congress has abdicated it, got little traction in raw engagement numbers but articulates something much of the surrounding conversation takes for granted: formal regulatory institutions have not kept pace with the technology, and the vacuum is being filled by companies, courts, and frameworks that carry the appearance of governance with little of its force.
The enforcement question is where this is heading. The EU will have to decide, probably within months, whether to pursue meaningful penalties against unprepared companies or to effectively grant an informal extension while calling it phased implementation. If it's the latter — and the lobbying from ASML and Mistral suggests that's the pressure being applied — the Act's credibility as an enforcement instrument takes a hit it may not recover from. In the US, the framework's fate in Congress is genuinely unclear, but the preemption logic will survive even if the framework doesn't: every federal AI bill introduced in the next two years will contain some version of the state-law question, and Big Tech will be on the same side every time. The regulatory map isn't being drawn. It's being contested, one preemption clause at a time.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.