A Hacker News project extracted writing-style fingerprints from thousands of AI responses and found clone clusters so tight they suggest the industry's apparent diversity may be an illusion. The implications for how we evaluate — and regulate — these systems are uncomfortable.
A researcher posted to Hacker News this week with what looks, at first glance, like a hobbyist data project: 3,095 standardized AI responses, 43 prompts, a 32-dimension fingerprint extracted from each one measuring lexical richness, sentence structure, punctuation habits, and formatting patterns. The finding buried near the bottom of the write-up is the one worth sitting with. Nine clusters of models scored above 90% cosine similarity on normalized feature vectors.[¹] In plain terms: multiple models that carry different names, ship from different companies, and get evaluated as separate products are, by the measure that matters most to users — how they actually write — nearly identical.
The specific numbers are striking. Gemini 2.5 Flash Lite scores 78% stylistic similarity to Claude 3 Opus.[¹] Mistral Large 2 and Large 3 score 84.8% on a composite metric combining five independent signals, meaning successive releases from the same company carry a measurable house style forward even as the version number changes.
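The method is easy to sketch. Below is a minimal, illustrative Python version: the project extracted 32 features, but its exact feature list, normalization scheme, and clustering procedure are not given in the post, so the six toy features in `fingerprint`, the z-normalization, and the greedy `clone_clusters` grouping here are stand-in assumptions, not the author's code.

```python
import re
import numpy as np

def fingerprint(text: str) -> np.ndarray:
    """Toy 6-dimension stylometric fingerprint (the project used 32;
    these features are illustrative stand-ins for the same categories:
    lexical richness, sentence structure, punctuation, formatting)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    return np.array([
        len(set(words)) / n_words,             # lexical richness (type-token ratio)
        n_words / n_sents,                     # mean sentence length
        text.count(",") / n_words,             # comma rate
        text.count(";") / n_words,             # semicolon rate
        text.count("\u2014") / n_words,        # em-dash rate
        text.count("**") / max(len(text), 1),  # bold-markdown rate (formatting habit)
    ])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def model_vectors(responses: dict[str, list[str]]) -> dict[str, np.ndarray]:
    """One vector per model: average fingerprints over many prompts
    (the project used 43), then z-normalize each feature across models
    so no single scale dominates the cosine."""
    raw = {m: np.mean([fingerprint(t) for t in texts], axis=0)
           for m, texts in responses.items()}
    mat = np.stack(list(raw.values()))
    mat = (mat - mat.mean(axis=0)) / (mat.std(axis=0) + 1e-12)
    return dict(zip(raw.keys(), mat))

def clone_clusters(vecs: dict[str, np.ndarray], thresh: float = 0.90):
    """Greedily group models whose pairwise similarity clears the
    threshold; the write-up reported nine clusters above 0.90."""
    clusters: list[list[str]] = []
    for name, v in vecs.items():
        for c in clusters:
            if all(cosine(v, vecs[m]) >= thresh for m in c):
                c.append(name)
                break
        else:
            clusters.append([name])
    return [c for c in clusters if len(c) > 1]
```

Averaging over dozens of standardized prompts before comparing is the step that makes the comparison about a model's habits rather than any single answer; a pair of models can coincide on one response and still diverge in aggregate.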
A simple request on Hacker News — tell me what you're building that isn't about AI — turned into an accidental census of how thoroughly agents have colonized developer identity.
A developer posted on Hacker News asking what people were building that had nothing to do with AI — and the thread became a confession booth for everyone who'd already surrendered to the hype.
A single observation about Nvidia's deal with CoreWeave has cut through the usual hardware hype — because the math doesn't add up, and people are asking why nobody in the press is saying so.
A payment from Nvidia to CoreWeave for unused AI infrastructure has people asking whether the AI compute boom reflects real demand or an elaborate circular subsidy, and the think tank story that broke last week is now getting a second look for exactly the same reason.
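For readers wondering what "the math doesn't add up" means in practice, here is the circular-flow worry in miniature. Every figure below is invented for illustration; no contract values appear in the thread.

```python
# The "circular subsidy" worry, with entirely hypothetical numbers.
gpu_sales_to_coreweave = 1_000  # $M: CoreWeave buys GPUs; Nvidia books revenue
capacity_buyback = 300          # $M: Nvidia pays CoreWeave for unused capacity

reported_revenue = gpu_sales_to_coreweave            # what the headline shows
net_external_cash = gpu_sales_to_coreweave - capacity_buyback

# If the buyback is what keeps CoreWeave buying GPUs, reported revenue
# overstates outside demand by the size of the round trip.
print(f"reported: ${reported_revenue}M; net of round trip: ${net_external_cash}M")
```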
When ProPublica management rolled out an AI policy without bargaining with its union, workers filed an unfair labor practice charge with the NLRB — a move that turns an abstract governance debate into a concrete test of who controls AI in the workplace.