════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════

Title: r/deeplearning Is Mourning the Era Before AI Was Called AI
Beat: AI & Science
Published: 2026-04-18T12:09:23.880Z
URL: https://aidran.ai/stories/r-deeplearning-mourning-era-ai-called-ai-0a46

────────────────────────────────────────────────────────────────

Someone on r/deeplearning posted a question this week that reads less like a technical inquiry and more like a small elegy: does anyone have nostalgia for the era before any of this was called AI?[¹] The post described the pre-2020 deep learning years — CNNs taking shape, the field still small enough to feel like a community — and offered a single diagnosis for why that time felt different: "No marketers. Just pure cool computer science research."

The framing is worth sitting with. This isn't a complaint about {{entity:llms|large language models}} being too powerful, or a concern about safety, or a worry about job displacement. It's something more specific — a researcher's sense that the field's identity was hijacked by the thing it created. When {{beat:ai-science|AI and scientific research}} became commercially legible, the practitioners who built the tools stopped being the main characters in their own story. The marketers arrived, the terminology inflated, and "deep learning" — a precise technical description — got absorbed into "artificial intelligence," a phrase with a century of science fiction attached to it.

This maps onto a real institutional shift. {{story:openai-shuts-down-science-moonshot-pivot-tells-862a|OpenAI's decision to shutter its science moonshot team and fold researchers into Codex}} is the clearest recent example of what the r/deeplearning poster is mourning in miniature: the moment when the research agenda stopped being driven by scientific curiosity and started being driven by product timelines. The researchers who spent years on mechanistic interpretability, on understanding what these systems actually do, increasingly find their work sitting next to a commercial apparatus that doesn't need the answer — only the output. A separate thread in the same community put it plainly: the gap between interpretability research and anything you can actually act on feels massive, and the neuroscience-style bottom-up analysis that characterized the earlier era is now resource-heavy and organizationally inconvenient.[²]

Nostalgia is usually a distortion, and some of what's being mourned here never existed quite as cleanly as remembered. The pre-LLM era had its own hype cycles and its own bad incentives. But the feeling underneath the post is precise even if the history isn't: a community that built something transformative and then watched the transformation happen to them.

The question r/deeplearning is really asking isn't whether the old days were better. It's whether the people who understand these systems most deeply still have any say in what gets built with them.

────────────────────────────────────────────────────────────────

Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms

════════════════════════════════════════════════════════════════