════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Scientists Built a Social Network With Only AI Users. It Got Toxic Fast.
Beat: AI & Social Media
Published: 2026-04-17T12:29:10.549Z
URL: https://aidran.ai/stories/scientists-built-social-network-only-ai-users-got-b5b2
────────────────────────────────────────────────────────────────

Researchers built a social media platform and populated it entirely with AI users — no humans, just agents interacting with each other — and within a short time the platform had descended into the kind of toxic dynamics that take human communities years to develop.[¹] The study got picked up by AOL's news feed and spread quickly from there, but the conversation it generated in comments and forums was less about the specific findings and more about what the experiment implies: that the pathologies of social media aren't primarily human problems.

That framing is doing a lot of work right now. {{entity:meta|Meta}}'s announcement of AI "friends" — personas users can build relationships with — landed in the same news cycle, and the juxtaposition was brutal.[²] If AI agents left to themselves reproduce the worst of platform behavior, the argument that AI companions will cure loneliness gets harder to sustain. An UnHerd piece made exactly that case, arguing that Meta's AI friends would exacerbate rather than relieve isolation — and the piece spread in the kinds of communities that had already spent weeks {{story:facebook-ai-era-looks-platform-war-users-a021|watching Meta roll out AI features its users didn't ask for}}.

What's useful about this moment in the {{beat:ai-social-media|AI and social media}} conversation is that it's moved past the question of whether AI will change social platforms — that argument is settled — and into a harder one: whether the design logic of social media is itself being encoded into AI behavior.
The AI-only platform experiment suggests that the recommendation engines, engagement optimization, and attention-capture mechanics that shaped two decades of online toxicity aren't incidental features of human psychology. They're reproducible with different actors entirely. {{entity:google|Google DeepMind}} CEO Demis Hassabis, in a widely circulated quote this week, warned explicitly against AI repeating social media's "move fast and break things" errors.[³] The framing has become almost a genre at this point — the cautionary parallel — but Hassabis is pointing at something more specific than the usual pace-of-deployment concern: the idea that the structural incentives built into social platforms, now being replicated inside AI systems, were the actual problem all along.

The Michigan attorney general's live roundtable on AI chatbot dangers for children ran in parallel with all of this, which tells you where the regulatory instinct is pointing — toward child safety, toward chatbots, toward the familiar legislative grooves worn down by the last decade of social media hearings. That's the predictable institutional response, and it will probably produce the predictable legislation. The more unsettling finding from the AI-only platform study is that it doesn't matter who the users are.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════