════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Students Are Writing Worse on Purpose, and Teachers Are Grading It
Beat: AI in Education
Published: 2026-04-25T22:53:16.282Z
URL: https://aidran.ai/stories/students-writing-worse-purpose-teachers-grading-e00f
────────────────────────────────────────────────────────────────

A university writing center director told a faculty colleague something this week that cuts to the heart of what {{beat:ai-in-education|AI in education}} has actually produced: students are coming in terrified of being accused of {{entity:plagiarism|plagiarism}}, and when they ask how to protect themselves, the honest answer — the one the director keeps having to give — is that sometimes they need to write worse.[¹] Introduce a grammatical stumble here. Make the syntax a little lumpy there. Signal fallibility, because fluency now reads as suspicious.

"We are mad," the faculty member wrote afterward, and the phrasing had the flat precision of someone who had moved through disbelief and landed in something harder.

This is the outcome nobody planned for and almost nobody in the institutional conversation about AI in classrooms wants to name directly. The tools meant to catch cheaters are punishing students for competence. The {{story:ai-schools-loudly-opposed-camps-quiet-question-974d|debate about AI in schools}} keeps splitting along familiar lines — adoption versus resistance, inevitability versus morality — while this particular consequence accumulates quietly in writing centers and office hours. It doesn't fit either camp's narrative neatly. Pro-AI voices can't celebrate it. Anti-AI voices can't blame the technology alone; the detection tools are the problem, not the generators. So the story mostly doesn't get told.
Alongside this, a call from doctors and {{entity:education|education}} experts for a five-year moratorium on AI in schools is circulating on Hacker News[²] — a demand that, whatever its merits, arrives too late to address what's already in motion. The detection infrastructure is already installed. The student behavior has already adapted.

One Bluesky voice framed the deeper issue without much apparent interest in being diplomatic: any use of AI in the classroom that prioritizes outcome over process is "valuing the wrong part" of education, and people claiming otherwise are, in that person's assessment, either liars or not actually thinking about learning. The harshness is almost beside the point. The observation is correct. And the detection tools, in their current form, embody exactly that confusion — they're measuring outputs, flagging surface features, and producing a system where the signal for "authentic student writing" is now strategic imperfection.

The moratorium advocates want to pause AI adoption. The faculty member at the writing center is dealing with students who've already internalized the surveillance logic and are gaming it by performing mediocrity. {{story:schools-told-students-get-answers-students-272e|A previous piece here}} traced how schools trained students to seek correct answers and then handed them a machine that does only that. The detection era has added a second layer: schools trained students to demonstrate their own thinking, then installed tools that can't tell the difference between good thinking and machine output — so students learned to hide both. The institutional response to that problem keeps arriving one adaptation too late.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════