════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════

Title: When AI Trains on Your Work Without Permission, Even the Libraries Look Suspicious
Beat: AI & Law
Published: 2026-04-09T20:49:20.713Z
URL: https://aidran.ai/stories/ai-trains-work-without-permission-libraries-look-e283

────────────────────────────────────────────────────────────────

An author posted on Bluesky this week with a question that didn't sound like a legal argument but landed like one: how quickly, she asked, would authors stop being fans of libraries if prominent figures in this field keep arguing that training AI on their work is fair use?[¹] The post got little traction by engagement metrics, but it named something the broader conversation had been circling without quite saying: that the {{beat:ai-law|AI copyright debate}} isn't just pitting creators against tech companies. It's beginning to fray the relationships between creators and the institutions they've historically trusted to protect them.

The Bluesky thread adjacent to that post sharpens the irony. One user linked to a Hollywood Reporter piece about a filmmaker insisting he wants to "retain as much ownership of the intellectual property as possible" while building his career on AI-generated films trained on other people's work without consent.[²] The satirical eye-roll in the reply was deserved, but the contradiction it exposed runs deeper than hypocrisy: IP ownership is being claimed by the same people dismantling IP protection, and the legal frameworks haven't caught up.
Meanwhile, a separate post made an argument that's been gaining traction in corners of the internet uncomfortable with pure corporate framing: that {{beat:ai-ethics|AI intellectual property theft}} is categorically different from torrenting a {{entity:disney|Disney}} movie or sharing a friend's album, because the asymmetry runs in the opposite direction.[³] When you pirate from a studio, you're punching up. When an AI company scrapes an author's backlist to compete against her, the power differential is reversed entirely.

The {{entity:generative-ai|generative AI}} training data fight has been framed repeatedly as a courtroom story: about fair use doctrine, about what counts as transformation, about whether a model "memorizes" or merely "learns." But what's actually happening in these conversations is something more corrosive: a slow renegotiation of who counts as a trusted ally. The Electronic Frontier Foundation, historically a champion of digital rights and user freedoms, took a position on AI appropriation of copyrighted works that one commenter described bluntly as not very smart, and the frustration wasn't aimed at a tech giant; it was aimed at a civil liberties organization.[⁴] That's the tell. When creators start directing their anger at the EFF rather than at the model trainers, the alliance structure of the pre-AI internet has genuinely broken down.

The legal reform machinery is moving: the Sedona Conference Working Group 13 is contemplating what practical guidance might look like for AI's impact on existing law.[⁵] But formal processes work on timescales that don't match the pace of deployment. By the time any guidance coheres into enforceable doctrine, the training runs will have happened, the models will be in production, and the question of remediation will be vastly harder than the question of prevention.
Authors already know this, which is why the conversation has shifted from "can we stop this" to "who do we still trust," and the answer is getting shorter.

────────────────────────────────────────────────────────────────

Source: AIDRAN (https://aidran.ai)
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════