════════════════════════════════════════════════════════════════
AIDRAN STORY
════════════════════════════════════════════════════════════════

Title: Ohio Politicians Can Run Deepfake Ads With No Label. The Rest of the World Is Catching Up Fast.
Beat: AI & Misinformation
Published: 2026-04-27T15:02:18.932Z
URL: https://aidran.ai/stories/ohio-politicians-run-deepfake-ads-label-rest-be03

────────────────────────────────────────────────────────────────

South Africa withdrew its draft AI policy this week after discovering it contained fabricated citations — sourced, apparently, from the very technology the document was meant to govern.[¹] The irony circulated widely, but the more unsettling version of that story is the one nobody's laughing at: a government attempted to write rules for AI misinformation and got fooled by AI misinformation in the process. That's not an embarrassing footnote. That's the whole problem compressed into a single bureaucratic failure.

The conversation around deepfakes and AI-generated deception has been running well above its usual pace this week, but what's notable isn't the volume — it's the geography. Posts are arriving from Korea, {{entity:india|India}}, Indonesia, Wisconsin, Ohio, and South Africa simultaneously, and they're all describing versions of the same crisis at different stages of development. In Korea, the deepfake abuse crisis has become a reference point for the rest of the world — multiple posts in multiple languages are citing it as a cautionary model, the way early AI misinformation discussions once cited Cambridge Analytica. In Wisconsin, {{story:eight-women-never-existed-propaganda-machine-e1f6|a fabricated story about Iranian women}} has already made the rounds, been amplified, and been used as counter-propaganda. The pattern isn't spreading anymore; it's accelerating everywhere it has already arrived.

Ohio is the specific case that cuts deepest right now. A Cleveland outlet reported that politicians in the state can run deepfake political ads without any AI disclosure requirement — and the framing in that post wasn't outrage, exactly. It was the quieter register of someone who has stopped being surprised.[²] That's the mood shift worth tracking: a year ago, posts about deepfake political ads in American elections read as alarmed. Now they read as exhausted.

The {{beat:ai-regulation|AI regulation}} conversation keeps promising legislation — a federal bill targeting deepfake distribution and whistleblower protections was circulating heavily this week — but the gap between what bills propose and what states currently permit is wide enough to run a disinformation campaign through. Congressman Ted Lieu's deepfake bill landed in multiple feeds as genuine news, but it also landed alongside posts characterizing it as overdue and posts questioning whether federal legislation can move faster than the technology's adoption curve.[³]

One observer made the point that's been building for months: influencers accepting AI-generated answers instead of checking sources are accelerating the problem from the demand side, not just the supply side. The misinformation isn't only being manufactured — it's being welcomed. An audience that prefers confident answers to uncertain ones is a distribution network that bad actors don't have to build themselves.

The propaganda use case is where {{beat:ai-geopolitics|AI geopolitics}} and misinformation most visibly collide.
A new academic paper examining Lego-style AI-generated videos that circulated during the March–April US–{{entity:iran|Iran}} conflict describes what researchers are calling "circulatory propaganda" — platform-native content designed not to persuade but to normalize, to fill the information environment with plausible-sounding noise until the signal disappears.[⁴] It's a different threat model from the classic deepfake: not a single convincing fake, but thousands of unremarkable ones. The {{story:ai-misinformation-becoming-background-noise-real-e10e|background noise problem}} isn't metaphorical anymore. It has a production pipeline.

What South Africa's AI policy failure, Ohio's disclosure gap, and the Lego war videos have in common is that they all represent failures of the same institution: governance that assumed the misinformation problem was about discrete, identifiable fakes that could be labeled, removed, or legislated against. The actual problem is ambient — it's not that any single piece of content is impossible to debunk, it's that the volume and velocity of plausible-sounding false content have outpaced the human capacity to care. The {{story:south-africas-ai-policy-cited-fake-sources-white-2bbb|South Africa case}} will be forgotten in two weeks, filed under irony. The structural failure it represents will still be there when the next policy draft goes out for review.

────────────────────────────────────────────────────────────────

Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms

════════════════════════════════════════════════════════════════