════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Someone Updated Their Will to Keep AI Away From Their Consciousness and the Joke Landed Like a Manifesto
Beat: AI Consciousness
Published: 2026-04-01T11:35:47.510Z
URL: https://aidran.ai/stories/someone-updated-their-keep-ai-away-their-5c53
────────────────────────────────────────────────────────────────

Someone on Bluesky updated their will this week. Two new clauses: do not build an AI version of my consciousness, and do not take my body fat and turn it into someone's BBL. The post got 85 likes — not viral by most standards, but in a conversation that usually runs on abstractions and philosophy papers, it landed like a brick through a window. The joke works because it's not really a joke. It's a person deciding that the legal instrument humans use to control what happens to their body after death now needs to cover their mind too.

What makes the moment interesting isn't the fear itself — anxiety about AI consciousness replication has been a low-grade hum in this conversation for years. What shifted is the register. This isn't a philosopher asking whether uploaded minds retain identity. It's someone putting it next to a cosmetic surgery clause in their actual will.

The {{beat:ai-consciousness|AI consciousness}} conversation has been running unusually optimistic over the past 24 hours — posts that would have read as dread a week ago now read more like gallows-humor self-possession, people asserting ownership of their inner lives with a kind of wry defiance rather than panic. The mood isn't that AI consciousness is coming and we're doomed. It's that AI consciousness is coming and they'd better not try it with me.

Elsewhere in the same conversation, someone made a sharper technical argument: that comparing AI reasoning to human {{entity:consciousness|consciousness}} is a category error.
Not a dismissal of AI's power — the post explicitly calls it a brilliant, transformative set of technologies — but a push against the conflation that makes the will-updating post feel necessary in the first place. The word they used was simulacrum. A simulacrum of human intelligence. It's a careful word, chosen to do specific work: to say that something can look like a thing, behave like a thing, produce outputs indistinguishable from a thing, without being the thing. The philosophical literature has been making this argument for decades. The fact that it's now appearing in Bluesky threads alongside BBL jokes suggests the mainstream conversation is finally doing the serious version of the work, just in its own idiom.

A third voice in this cycle offered an accidental design principle: never name an AI after a mythological figure, because Durandal ends badly for everyone. Name it Charlene. What's Charlene going to do? The post is satirical, but it identifies something real — that the names we give AI systems do anticipatory moral work. Naming a system after a legendary sword or a {{entity:god|god}} is a way of pre-loading it with significance, with the narrative gravity of something that was always going to matter. Charlene carries no such weight. She's just Charlene.

This is, in its own sideways fashion, an argument about {{beat:ai-safety-alignment|AI safety}} — about how the stories we tell around a technology shape what we permit ourselves to build. The person updating their will, the person insisting on the word simulacrum, the person advocating for Charlene: they're all making the same move. They're trying to find language that keeps the human at the center of this story, rather than an afterthought in someone else's.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════