Catholic Theologians Are Arguing With Bluesky and Neither Side Knows It
The Anthropic accountability lawsuit has drawn amicus briefs from moral philosophers and flat dismissals from activists — two camps reaching the same conclusion about AI by routes so different they can't hear each other.
Fourteen Catholic moral theologians filed amicus briefs in federal court this week supporting the Anthropic lawsuit — citing, among other things, a new American pope who has made AI ethics a stated priority of his papacy. On Bluesky, the response to the same underlying case was: "There is nothing ethical about genAI." These two positions aren't really in disagreement. They're not in contact at all.
The Anthropic case is about accountability — specifically, who bears legal and moral responsibility when an algorithm contributes to harm. The theologians' briefs engage that question with centuries of moral philosophy behind them. The Bluesky posts engage it too, with a different but coherent logic: that corporations have structured AI deployment precisely to evade accountability, and that the institutions meant to enforce it — courts, regulatory bodies, ethics boards — have already been captured or rendered toothless. "Lack of accountability is what these corporations like best about AI" is not a dismissal of the question. It's an answer to it. The problem is that one side is filing paperwork and the other is writing posts, and those two activities don't produce the same outcomes even when they share the same premise.
The most clarifying moment of the week came from neither track. A maintainer of a popular GitHub repository embedded a prompt-injection trap in their contribution guidelines — a hidden instruction that would cause an AI coding assistant to self-identify if it was generating the pull request. Within a day, half of all incoming PRs had outed themselves as bot-generated. The story spread because it was the rare thing: empirical. Not a philosophical claim about AI's moral status, not a verdict about corporate intent, but a measurement. The r/degoogle and r/privacy communities are doing similar work on biometric surveillance, asking not "is this wrong" but "can we show that it's happening and to whom." This forensic mode is slower and less satisfying than either the institutional briefs or the declarative posts. It also produces evidence that courts can actually use.
What the week made visible is that "AI ethics" now describes three separate projects. The institutional project asks how to build accountability into existing frameworks — and is finding unexpected allies in religious traditions with long experience thinking about when technologies outrun the moral systems meant to govern them. The forensic project asks how to document what's already occurring — and is quietly accumulating the kind of proof that changes policy. The declarative project has already rendered its verdict and is now engaged in repetition, which is its own form of work even if it doesn't look like it. The institutional and forensic tracks are converging, slowly, on something that might eventually matter in a courtroom. The declarative track is winning the attention economy in the short run and probably burning itself out of it in the medium one — you can only say "AI bad" so many times before even sympathetic readers stop clicking.
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.
More Stories
A Federal Court Just Blocked the Trump Administration From Treating Anthropic as a National Security Threat
A judge stopped the White House from designating Anthropic a supply chain risk — and on Bluesky, the ruling landed alongside a wave of posts arguing the entire AI industry's financial architecture is fiction.
Using AI Images to Win Arguments Is Lazy, and One Bluesky User Is Done Pretending Otherwise
A pointed post about AI-generated political imagery captured something the bias conversation usually misses — the tool's role as a confirmation machine, not just a content generator.
The EFF Just Sued the Government Over an AI That Decides Who Gets Medical Care
A lawsuit targeting Medicare's secret AI care-denial system arrived the same week a KFF poll showed Americans turning to chatbots for health advice because they can't afford doctors. The two stories are the same story.
Reddit's Enshittification Meme Has Found Its Most Convenient Target Yet
A post in r/degoogle distilled the internet's frustration with AI product degradation into a single pizza-with-glue joke — and the community receiving it already knows exactly what it means.
Dundee University Made an AI Comic About a Serious Topic and Forgot to Ask Its Own Artists
A Scottish university used AI-generated images in a public awareness project — without consulting the comic professionals on its own staff. The Bluesky post calling it out captured something the consciousness beat usually misses.