Trump Named His Science Council and Bluesky Immediately Connected It to the AI Governance Fight
The White House just handed tech billionaires a seat at the table for writing their own AI rules — and the people who've been warning about this for two years are furious in ways that are hard to dismiss.
A Bluesky post with 172 likes put it plainly this week: Trump named his Council of Advisors on Science and Technology, and if his preemption of state-level AI regulation hadn't already made the message clear, this confirmed it — Big Tech billionaires are writing their own rules, and everyone else can watch. That post landed in the same 48-hour window as another one, this time from a writer in her forties who described, with something between pride and contempt, never using AI for any part of her work, not even research, because she "hates the whole thing on a conceptual level." The two posts aren't about the same thing, but they're expressions of the same political weather: a growing sense that the institutions meant to mediate this technology have been captured, and that personal refusal is the only available response.
The AI and law conversation tracked this anxiety directly. News coverage of the space has turned notably positive — full of stories about OpenAI-adjacent legal AI tools outperforming human lawyers at benchmarks, Thomson Reuters launching autonomous research agents, Harvey leading in performance studies — while on X and Bluesky, the mood ran in the opposite direction. A widely shared post declared Sora's shutdown proof that generative AI is "cooked," driven out by copyright litigation. Another account catalogued the math: legal disputes, copyright infringements, compute losses, cash burn, and a shrinking partner list adding up to an exit. The gap between what the trade press is celebrating and what practitioners and observers are actually saying has rarely been this wide — and Sora's death has become the copyright movement's sharpest piece of evidence.
Job displacement is the place where the abstract governance argument becomes concrete. Meta laid off hundreds more workers this week — Reality Labs, Facebook, global operations, recruiting, sales — explicitly framed as funding AI investment. A post with 45 likes on Bluesky made no effort to editorialize; it just listed the divisions. The response from Goldman Sachs analysts, amplified on X, went further: this isn't a labor shock, one post argued, it's "the collapse of how organizations build capability." When you stop hiring junior people because AI handles their work, you stop growing the institutional knowledge that makes senior people good. The entry-level cuts that have been accumulating for months are starting to register as something more structural than a headcount adjustment. And on X, a post about Gen Z landed with the feel of a generation's grievance finally finding its list form: pandemic schooling, war threats, AI displacing jobs, record property prices, peak divorce rates — each life stage, each crisis, stacked like evidence at a trial.
What makes this moment distinct isn't the volume of complaints — that's been high for months — it's that the complaints now have an identifiable political architecture. The writer who refuses AI tools on principle, the post connecting the science council to regulatory capture, the Goldman Sachs framing of layoffs as civilizational rather than cyclical: these are no longer separate conversations. They've become one argument about who governs this technology and in whose interest. The federal preemption of state AI rules didn't start that argument, but it may have given it a shared enemy. Trump has become the connective tissue of nearly every AI anxiety right now — not as a technologist, but as the political force that determines which anxieties get institutional responses and which get dismissed. That's the thing the optimistic trade coverage keeps missing: the benchmarks may be real, but the legitimacy problem is getting worse faster than the products are getting better.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.