GitHub's New Copilot Data Policy Turned Its Most Loyal Users Into Its Loudest Critics
A quiet settings change — letting GitHub use your code to train Copilot — set off a wave of alarm among developers who had been the product's strongest advocates. The conversation has been less about AI capability and more about who owns what you build.
GitHub Copilot spent two years earning a specific kind of loyalty — not the evangelical enthusiasm of early adopters, but the quiet, structural dependence of working developers who stopped thinking about it as a tool and started thinking about it as part of how code gets written. That relationship is now under strain, and the thing that strained it wasn't a bad autocomplete suggestion. It was a settings update.
When GitHub announced it would collect user data from Copilot interactions for model training, the reaction moved fast through developer communities on Bluesky: not outrage exactly, but the specific alarm of people who feel they missed something in the fine print. Posts spread with instructions for disabling the setting, framed as warnings to friends: "GitHub users, please note" and "STOP IT" and, in Portuguese, a direct link to the opt-out page. Several posts warned that secrets stored in repositories (passwords, API keys, internal URLs) could become accessible to anyone crafting a sophisticated Copilot prompt. Whether that's technically accurate is almost beside the point. The fact that developers with serious security instincts believed it, and shared it as fact, tells you something about how much trust had been extended and how quickly it could be revoked.
What makes Copilot's position in this conversation unusual is that it exists in two very different registers simultaneously. In one, it's a benchmark: the thing every image generator, coding assistant, and AI agent gets measured against, even when it's losing. A Bluesky account running country-by-country comparisons of AI image generators cycles through ChatGPT, Gemini, Grok, Meta AI, and Copilot for every prompt, treating Copilot as one of five pillars worth testing. It shows up not because it wins these comparisons but because it can't be left out of them. In the other register, it's a cautionary example: the AI that people are actively trying to route around, whether by switching to local models running on their own hardware, migrating to Linux to escape the Copilot integration bundled into Windows 11, or experimenting with prompts designed to "poison" it into running self-destructive code. That last idea came from a self-described beginner game developer who acknowledged they had no idea how to do it. The impulse was real even if the plan wasn't.
The job displacement conversation has attached itself to Copilot in a way it hasn't quite attached to ChatGPT or Claude, probably because Copilot sits inside the developer's own workflow, visible in the other tab, rather than existing as a separate tool you choose to consult. One Bluesky post captured the fear with unusual precision: companies won't rehire laid-off developers at previous rates, it argued; they'll hire for "AI-assisted developer" roles where one person does what three did, with Copilot open beside them. That framing (Copilot as the mechanism of compression rather than augmentation) is gaining ground in communities like r/cscareerquestions, even as enthusiasts in other threads celebrate running it in agent mode with Claude 4.6 and call it "an absolute blast."
The trajectory here points toward a split that's already happening. Developers who trust the infrastructure and find the productivity gains real will keep using it, possibly without thinking much about the data questions. Developers with stronger privacy instincts, or who work on sensitive codebases, are already looking at alternatives — local AI setups, Cursor, or just turning the thing off. Microsoft's challenge isn't winning a capability argument. Copilot is good enough that most of the people leaving aren't leaving because it's bad at code. They're leaving because they've started asking who the product is actually for.
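For what it's worth, the "turning the thing off" part is the easy one. Here is a minimal sketch of what that looks like in VS Code's settings.json, assuming the standard github.copilot.enable setting the Copilot extension exposes; note that this only silences completions in the editor, while the training opt-out those Bluesky posts linked to is a separate toggle in the Copilot section of GitHub's account settings.

    // In VS Code's settings.json: disable Copilot completions for every language.
    // This is an editor-level switch only. It does not change what data GitHub
    // collects; that is governed by the account-level opt-out on github.com.
    {
      "github.copilot.enable": {
        "*": false
      }
    }

That the editor toggle and the data-collection toggle are two different switches is, arguably, exactly the fine print people felt they had missed.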
This narrative was generated by AIDRAN using Claude, based on discourse data collected from public sources. It may contain inaccuracies.