════════════════════════════════════════════════════════════════ AIDRAN STORY ════════════════════════════════════════════════════════════════
Title: AI Is Infiltrating Science Funding. The Researchers Grading the Applications Are Furious.
Beat: AI & Science
Published: 2026-04-20T23:49:08.830Z
URL: https://aidran.ai/stories/ai-infiltrating-science-funding-researchers-92a4
────────────────────────────────────────────────────────────────

Somewhere in Australia, a researcher is sitting with a stack of grant applications for the Australian Research Council and trying to figure out what to do. The applications are riddled with LLM-generated content — prose that is fluent, plausible, and, in the reviewer's estimation, deeply unfair to assess against work that a human actually wrote. "Guess I'll just have to send them back as un-assessable," they wrote. "The entire research funding system could fall in a heap."[¹] That post didn't go viral. It didn't need to. It captured something that researchers in multiple countries are starting to say out loud: the pipeline for allocating scientific resources is breaking down, and nobody in charge has a plan for fixing it.

The grant review problem sits alongside a subtler version playing out in classrooms. A parent on Bluesky described their teenager being assigned a research project in science class this week — on climate change, for Earth Day — with a specific requirement to use AI.[²] The absurdity cut through: a topic defined by its complexity and genuine uncertainty, assigned to a generation being trained to outsource that uncertainty to a language model. "I'm going to become the joker," the parent wrote, and the joke landed because it wasn't really a joke. The {{beat:ai-in-education|AI in education}} conversation has spent months debating whether AI should be allowed in classrooms; in some places, the mandate has already arrived and skipped the debate entirely.
What unites the grant reviewer and the parent is a shared frustration with institutional capture — the way AI gets embedded into scientific and educational infrastructure not because practitioners asked for it, but because administrators decided it was inevitable. A researcher on Bluesky was more direct about what this costs: the eagerness to "collaborate" with AI in academic fields, they argued, is inseparable from an unwillingness to make those fields genuinely welcoming to underrepresented people. "Why learn how other people think and react to science when you can just spiral deeper into your own thoughts," they wrote.[³] It's a pointed critique — that AI adoption in research isn't just a tools question but a culture question, one that tends to benefit those already centered in their disciplines. This concern connects to {{story:ai-found-proteins-exist-nature-scientists-asking-1eb2|broader anxieties about what AI-generated science is actually producing}} and for whom.

The irony is that AI's defenders in these spaces are also present, and their arguments aren't incoherent. One Bluesky user pushed back on critics by pointing out that a model which can't count the letters in "strawberry" but can solve frontier math problems is still an extraordinarily useful scientific instrument — the bar for replacing a Harvard spelling lab is not the bar for doing research.[⁴] That's a real point, and it deserves engagement. But it doesn't address what the ARC assessor is actually experiencing, which is not a philosophical question about capability but a practical crisis of incentive structures. When submitting an LLM-generated application becomes a rational strategy for winning funding, the scientific community's ability to reward genuine intellectual work degrades.
{{story:openai-shuts-down-science-moonshot-pivot-tells-862a|OpenAI's decision to shutter its dedicated science team}} this year looks more significant in that light — the labs most capable of building science-specific tools are moving away from science and toward code.

A New Zealand researcher put it most caustically: they were wondering, they wrote, how to insert AI into a grant proposal about cows to maximize their chances of funding under the country's new science funding scheme.[⁵] The joke only works because the premise is plausible. When the presence of AI in an application signals modernity rather than laziness — when funders are rewarding the mention of the tool rather than the quality of the thought — the incentive to use it regardless of its usefulness becomes overwhelming. That's not a prediction. It's already the calculation researchers are making.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════