A simple request on Hacker News — tell me what you're building that isn't about AI — turned into an accidental census of how thoroughly agents have colonized developer identity.
Someone on Hacker News posted a simple plea this week: tell me what you're building that has nothing to do with AI.[¹] The post — "Ask HN: What are you building that's not AI related?" — opened with a note that felt less like curiosity than exhaustion. "Seems like every man and his dog is building an AI agent harness," the poster wrote, asking for something, anything, that existed outside the current gravitational pull. The thread got 18 comments and enough upvotes to surface. What's telling is what the comments contained: mostly AI projects, lightly qualified.
That dynamic — the attempted escape that loops back to the thing you're escaping — captures something real about where AI agents sit in developer culture right now. The term has become so ambient that it organizes the space around itself. You define your project relative to agents even when you're explicitly trying not to. Meanwhile, on Bluesky the same week, a mock press release announced that "Idiotiq," an AI agent for automating the creation of AI-themed startups, had raised $20 million on top of a $10 million seed round at a $1.9 billion post-money valuation.[²] The post read as satire, but the numbers it chose — the stack of rounds, the absurd valuation, the recursive premise of an agent that generates companies named after agents — were calibrated precisely to be indistinguishable from real announcements. That's the joke, and also the problem.
What's happening underneath the volume is a quiet split between two groups who are both talking about AI agents but mean almost opposite things. Developers on Hacker News are grinding through real implementation limits — session state, sandboxed tool execution, the gap between demo and production that one commenter described as the defining frustration of the current moment. These are people for whom "agent" means something specific and often disappointing. Investors and announcement-writers are using the same word to mean something like "software that does stuff automatically," which is vague enough to support billion-dollar valuations and vague enough to describe a cron job. The Hacker News poster who built a ship-tracking dashboard and noted he'd "probably have an AI agent do the same thing on some cron interval" — manually copying JSON from a marine traffic site because live APIs are too expensive — is technically building an AI agent.[¹] He is also, in every meaningful sense, building something that has nothing to do with the thing the valuations are about.
This is what the gap between institutional and developer AI discourse actually looks like at ground level — not a fight, but a terminology problem that functions like a funding mechanism. When "agent" can mean both a $1.9 billion autonomous startup factory and a guy copying JSON by hand before automating it with a cron job, the word has done its job for venture capital and failed everyone building in the space. The Hacker News thread will keep getting AI answers to its non-AI question. That's not irony — it's a measurement.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
A payment from Nvidia to CoreWeave for unused AI infrastructure has people asking whether the AI compute boom is real demand or an elaborate circular subsidy: the math doesn't add up, nobody in the press is saying so, and the think tank story that broke last week is now getting a second look for exactly the same reason.
When ProPublica management rolled out an AI policy without bargaining with its union, workers filed an unfair labor practice charge with the NLRB — a move that turns an abstract governance debate into a concrete test of who controls AI in the workplace.
A Hacker News project extracted writing-style fingerprints from thousands of AI responses and found clone clusters so tight they suggest the industry's apparent diversity may be an illusion. The implications for how we evaluate — and regulate — these systems are uncomfortable.