════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════
Title: Warnings Don't Work. Iran Is Making LEGO Propaganda. And Nobody Can Agree on What Counts as Proof.
Beat: AI & Misinformation
Published: 2026-04-05T08:14:34.523Z
URL: https://aidran.ai/stories/warnings-work-iran-making-lego-propaganda-nobody-7b65
────────────────────────────────────────────────────────────────

A researcher posted a thread on Bluesky this week summarizing findings from multiple preregistered experiments on {{beat:ai-misinformation|AI-driven manipulation}}. The post, which pulled 145 likes before most of the platform's morning users had logged on, walked through three categories of attack — deepfake videos, AI-generated misinformation articles, and personality-targeted political ads — and arrived at a conclusion that read less like a finding than a verdict: warnings largely don't protect people.[¹] The replies weren't panicked. They had the particular flat affect of a community that has been saying this for two years and is tired of being proven right.

This landed the same week that a separate Bluesky post went semi-viral explaining why {{entity:iran|Iran}}'s AI propaganda operation is succeeding. The specific artifact in question was an AI-generated LEGO movie depicting {{entity:trump|Trump}} as, in the post's framing, a war-hungry pedophile — absurdist in format, precise in targeting, widely shared across platforms that still have no meaningful policy response to animated synthetic content.[²] The juxtaposition is worth sitting with: one post documents that our defenses are broken; the other documents who is already walking through the gap.

The broader {{beat:ai-misinformation|AI misinformation}} conversation is running uniformly negative right now — on Bluesky, on YouTube, in the news — which is itself unusual. These platforms rarely agree on tone. What's producing the consensus isn't a single event but an accumulation: the FCC finally banning AI-generated voices in robocalls, a reported spike in deepfake-linked fraud across Asian fintech markets, a senator calling for mandatory labeling of AI-generated content that will almost certainly arrive too late to matter. Each story is individually manageable. Together they form a picture of infrastructure that was never built for the moment it's now being used in.

What the warnings-don't-work research makes explicit — and what the {{story:ai-deepfakes-found-their-moment-arrived-every-e69c|deepfakes discourse has been circling}} for months — is that the entire detection-and-labeling paradigm assumes a model of harm that no longer fits. The model assumes that people share AI-generated disinformation because they can't identify it. The Iranian LEGO film suggests something more uncomfortable: that identification isn't the point, that the affective punch lands whether or not the viewer knows it's synthetic, and that virality is the mechanism, not the mistake. If that's right, the next generation of media literacy campaigns will be solving the wrong problem — and the researchers running preregistered experiments to document this already know it.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════