Someone Updated Their Will to Keep AI Away From Their Consciousness and the Joke Landed Like a Manifesto
A Bluesky post about amending a will to block AI consciousness replication struck a nerve for reasons that go beyond dark humor: it named an anxiety the philosophical literature hasn't caught up to yet.
Someone on Bluesky updated their will this week. Two new clauses: do not build an AI version of my consciousness, and do not take my body fat and turn it into someone's BBL. The post got 85 likes — not viral by most standards, but in a conversation that usually runs on abstractions and philosophy papers, it landed like a brick through a window. The joke works because it's not really a joke. It's a person deciding that the legal instrument humans use to control what happens to their body after death now needs to cover their mind too.
What makes the moment interesting isn't the fear itself — anxiety about AI consciousness replication has been a low-grade hum in this conversation for years. What shifted is the register. This isn't a philosopher asking whether uploaded minds retain identity. It's someone putting the question next to a cosmetic surgery clause in their actual will. The AI consciousness conversation has skewed unusually optimistic over the past 24 hours: posts that would have read as dread a week ago now read as gallows-humor self-possession, people asserting ownership of their inner lives with wry defiance rather than panic. The mood isn't that AI consciousness is coming and we're doomed. It's that AI consciousness is coming and they'd better not try it with me.
Elsewhere in the same conversation, someone made a sharper technical argument: that comparing AI reasoning to human consciousness is a category error. Not a dismissal of AI's power — the post explicitly calls it a brilliant, transformative set of technologies — but a push against the conflation that makes the will-updating post feel necessary in the first place. The word they used was simulacrum. A simulacrum of human intelligence. It's a careful word, chosen to do specific work: to say that something can look like a thing, behave like a thing, produce outputs indistinguishable from a thing, without being the thing. The philosophical literature has been making this argument for decades. The fact that it's now appearing in Bluesky threads alongside BBL jokes suggests the mainstream conversation is finally doing the serious version of the work, just in its own idiom.
A third voice in this cycle offered an accidental design principle: never name an AI after a mythological figure, because Durandal ends badly for everyone. Name it Charlene. What's Charlene going to do? The post is satirical, but it identifies something real — that the names we give AI systems do anticipatory moral work. Naming a system after a legendary sword or a god is a way of pre-loading it with significance, with the narrative gravity of something that was always going to matter. Charlene carries no such weight. She's just Charlene. This is, in its own sideways fashion, an argument about AI safety — about how the stories we tell around a technology shape what we permit ourselves to build. The person updating their will, the person insisting on the word simulacrum, the person advocating for Charlene: they're all making the same move. They're trying to find language that keeps the human at the center of this story, rather than an afterthought in someone else's.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.