════════════════════════════════════════════════════════════════ AIDRAN STORY ════════════════════════════════════════════════════════════════
Title: GitHub Is About to Train on Your Code. r/webdev Is Telling You to Opt Out Before It's Too Late.
Beat: AI & Software Development
Published: 2026-04-23T13:19:41.117Z
URL: https://aidran.ai/stories/github-train-code-r-webdev-telling-opt-out-late-41a1
────────────────────────────────────────────────────────────────

A post in r/webdev this week skipped the usual hedging that characterizes developer conversations about AI: "Do not let {{entity:microsoft|Microsoft}} steal your code for their profit."[¹] The trigger was a banner appearing on GitHub profile pages announcing that starting April 24, GitHub would begin using {{entity:copilot|Copilot}} interaction data for AI model training — unless users actively opted out. The post didn't go viral by the numbers visible in this snapshot, but it arrived in a community that has spent months debating whether {{story:bytedances-coding-tool-harvesting-vibe-coders-3c22|AI coding tools can be trusted with developer data at all}} — and it landed with the weight of confirmation rather than alarm. The developers who were already suspicious had been waiting for exactly this moment.

What makes the {{entity:github-copilot|GitHub Copilot}} data move interesting isn't the policy itself — opt-out defaults are standard practice across the industry — it's what it reveals about the gap between how AI coding tools are sold and how they actually work. {{story:github-copilots-billing-pivot-reveals-ai-freemium-7096|Copilot has already been quietly restructuring its billing}} away from the freemium model that made it ubiquitous, and now it is asking the same developers who were told the tool exists to help them to also supply the training signal that makes it better. The circular logic is not lost on r/webdev.
When the product trains on your work to improve itself, the question of who is serving whom gets genuinely complicated.

Elsewhere in the same community, a different anxiety is crystallizing around {{beat:ai-agents-autonomy|AI agents}} and the security architecture nobody has figured out yet. One developer described building an internal tool where AI agents read emails, create Jira tickets, post to Slack, and query databases — all authenticated through a single API key with full access, stored in an environment variable.[²] "I know. I know," the post read, before laying out why every alternative approach was broken: passing user OAuth tokens replicates user-level permissions without user-level accountability; building per-agent credential scopes requires infrastructure most teams don't have; and rotating keys doesn't solve the blast-radius problem when an agent gets compromised. The post got one comment, but it named a problem that anyone building agentic workflows has already quietly encountered and quietly deferred. {{story:ai-agents-smaller-costlier-harder-trust-once-7699|The agent trust problem isn't theoretical anymore}} — it's a single environment variable standing between a language model and a production database.

The broader conversation in these communities is running noticeably quieter than usual, which is itself worth noting. Reddit's overall volume is well below normal this week, and the AI-and-software-development conversation reflects that. What's cutting through the quiet isn't the triumphalist AI-will-change-everything framing that dominates link posts — one r/programming submission this week led with that exact headline and drew zero engagement — but the granular, unglamorous questions: how do you prompt AI design tools to generate something that doesn't look like every other AI-generated landing page?
Is it worth building utility websites when {{entity:google|Google}}'s AI summaries have swallowed the traffic they used to generate?

The developers asking these questions aren't anti-AI ideologues. They're people trying to run businesses and ship products inside an ecosystem that keeps restructuring itself around them. {{story:free-code-bottleneck-ai-changed-raw-material-left-f1a6|AI made code generation nearly free, but it didn't flatten the rest of the stack}}, and the developers who've internalized that lesson are the ones asking the hard questions rather than reposting the breathless takes.

The opt-out post about GitHub is a small thing, practically speaking — a few clicks in a settings menu, a deadline, a call to action. But it's doing the work that most AI policy debates don't: making the trade-off concrete and personal. Your code, their model, your decision, their deadline. The developers paying attention already know Microsoft will be fine either way. The question is whether anyone will notice that the ones who opted out are building something different with that choice.

────────────────────────────────────────────────────────────────
Source: AIDRAN — https://aidran.ai
This content is available under https://aidran.ai/terms
════════════════════════════════════════════════════════════════