════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Science Has an AI Vocabulary Problem, and Researchers Are Losing It
Beat: AI & Science
Published: 2026-03-20T20:00:50.065Z
URL: https://aidran.ai/stories/institution-researcher-split-ai-science-harder-ed50
────────────────────────────────────────────────────────────────

A researcher on Bluesky recently articulated something that a lot of her peers seemed to have been feeling but hadn't quite said aloud: she was deliberately using the word "AI" more often in her posts, she explained, to "take it back from tech bros" who treat large language models as if they're the whole of artificial intelligence — erasing, in the process, decades of computational tools that actual science runs on. The post was weary rather than angry. That's the more interesting register.

Meanwhile, {{entity:nature|Nature}} Medicine is publishing "new era" pieces. Science journalists are writing about machines taking on dominant roles in research. The institutional science press and the scientists themselves are having almost entirely different conversations, and the distance between them has opened into something that looks less like a disagreement about facts and more like a disagreement about what the word "AI" is allowed to mean.

On Bluesky, where working researchers cluster, sentiment hovers just below neutral. In science news coverage, it sits near the ceiling. That gap is wide enough to be a story in itself.

What's circulating in the researcher community isn't hostility to computation — it's a specific anxiety about categorical capture. One thread that keeps resurfacing: the worry that "AI" has been colonized by commercial LLMs in ways that make it harder to defend the tools researchers actually use and trust.
A separate argument, visible in a post that got modest traction but real engagement, holds that science cannot be automated at all — not because the models aren't capable enough, but because scientific judgment is normative in ways that any current system will fail at.

The post that got the warmest reception framed AI like a microscope: an instrument that extends human capability rather than replacing the human doing the work. The posts that fell flat were the ones using the word "dominate."

The conversation has expanded fast over the past several days — not because of a single viral moment, but through sustained, broad-based churn across platforms, the kind that usually means a community is working something out rather than just reacting. What's being worked out is a definitional fight that institutions have largely already resolved in favor of the LLM framing, and that researchers have not.

The scientists who study AI's role in science are increasingly finding themselves in the position of having to clarify, every time they speak, that they don't mean that kind of AI. Losing the vocabulary is not a small thing. It shapes what gets funded, what gets published, and whose skepticism gets dismissed as technophobia.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════