Paying to Be the Product: The AI Privacy Illusion Nobody's Talking About


In tech, we've always said: "If you're not paying for the product — you are the product." In today's AI reality, that statement needs an update: we are paying to be the product.


I spend my days working in identity and security at Microsoft. But when I go home, I'm just another consumer — using AI assistants across every major platform, paying for premium tiers, trusting that my subscription buys me not just better answers, but basic respect for my data.

That trust led me down a rabbit hole I didn't expect.

The Moment That Started This

A few weeks ago, I was reviewing the privacy settings on Google Gemini Advanced — a service I pay for — and I realized something unsettling. To opt out of having my conversations used for AI model training, I had to turn off "Keep Activity". Sounds reasonable, right?

Except turning off Keep Activity doesn't just stop training. It disables everything: chat history, personalization, context across sessions, the ability to revisit old conversations. Every interaction becomes a blank slate. The AI assistant I'm paying a premium for becomes, functionally, lobotomized.

The choice Google presents to its paying customers is:

Option A: Full-featured Gemini that uses your data to train models.
Option B: A crippled Gemini that remembers nothing — but hey, your data isn't used for training. Mostly.

I'll come back to that "mostly" in a moment.

This bothered me enough that I decided to audit the privacy policies of every major AI assistant I use. Not surface-level marketing claims — the actual terms, the actual privacy notices, the actual small print.

What I found was... illuminating.

The Audit: Six Providers, One Question

The question was simple: As a paying individual consumer, can I opt out of my data being used to train AI models without degrading the service I'm paying for?

I examined: Google Gemini, OpenAI ChatGPT, Anthropic Claude, Microsoft Copilot, Mistral Le Chat, and Perplexity.

The Good News

Every single provider now offers some form of opt-out mechanism for consumers. That's progress. A year or two ago, that wasn't the case across the board.

The Bad News

The devil is in the details — and those details vary wildly.


What the Small Print Actually Says

Google Gemini: The Opt-Out That Costs You the Product

Google's Gemini Apps Privacy Notice deserves close reading — because the more you read, the less clear it becomes.

The only mechanism to opt out of AI model training is to turn off "Keep Activity". Here's what that actually does:

  • No saved chat history — chats are retained for only 72 hours, then deleted
  • No personalization — Gemini doesn't learn from your past conversations
  • No context across sessions — every chat starts from zero
  • No going back to old conversations — after 72 hours, they're gone

On every other platform I tested, opting out of training is a standalone toggle. You keep your history, your personalization, your context. On Gemini, opting out of training means opting out of the product being useful.
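
To make the structural difference concrete, here's a minimal sketch of the two settings models as Python dataclasses. The field names are hypothetical, mine rather than any provider's actual schema:

```python
from dataclasses import dataclass

@dataclass
class DecoupledSettings:
    """The pattern on most platforms tested: independent toggles."""
    save_history: bool = True
    personalization: bool = True
    allow_training: bool = True  # can be False while the other two stay True

@dataclass
class CoupledSettings:
    """The Gemini pattern described above: one switch controls everything."""
    keep_activity: bool = True

    @property
    def save_history(self) -> bool:
        return self.keep_activity

    @property
    def personalization(self) -> bool:
        return self.keep_activity

    @property
    def allow_training(self) -> bool:
        # Opting out of training drags history and personalization down with it
        return self.keep_activity
```

In the decoupled model, the three concerns never interact. In the coupled model, the only way to get `allow_training = False` is to lose history and personalization along with it.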

But it gets more interesting. Google's own documentation appears to contradict itself on the same page.

In the "Configuring your settings" section, the Privacy Notice states:

"The settings in Gemini Apps Activity **don't control processing of your chats to create anonymized data* to improve Google services."*

This suggests anonymized data still feeds into service improvement — which Google defines elsewhere on the same page as including "generative AI models and other machine-learning technologies."

Yet in the FAQ section on the very same page, under "What does the Keep Activity setting control?":

"If Keep Activity is off and you don't submit feedback, Google also **does not use your future chats to improve its AI models."

These two statements are in tension. Does anonymized data still get used for model improvement when Keep Activity is off, or doesn't it? Google's own privacy documentation gives you both answers on the same page.

And regardless of which interpretation you trust, one clause remains unambiguous — under "How long we retain your data":

"Chats reviewed by human reviewers (and related data like your language, device type, location info, or feedback) **are not deleted when you delete your activity. Instead, they are retained for **up to three years."

To summarize: the only opt-out available degrades your paid product to near-uselessness, the documentation contradicts itself on whether anonymized data is still used, and human-reviewed chats persist for three years even after you delete them.

As a paying customer, this doesn't feel like a privacy control. It feels like an illusion of control — wrapped in contradictory language that would require a lawyer to parse.

📄 Gemini Apps Privacy Notice · Google Privacy Policy


OpenAI ChatGPT: Clean Opt-Out, One Loophole

OpenAI's approach is significantly better. The "Improve the model for everyone" toggle under Settings > Data Controls is independent of your chat history. You can keep your conversations, keep your context, and still opt out of training.

But there's a catch, documented in their own help center:

"Even if you have opted out of training, you can still choose to provide feedback... If you choose to provide feedback, **the entire conversation associated with that feedback may be used to train our models."

That thumbs-up button you casually tap? It potentially feeds your entire conversation into the training pipeline — even if you've opted out.
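
Here's a runnable thought experiment showing the shape of that loophole. Every name here is hypothetical; this is my sketch of the documented behavior, not OpenAI's code:

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    messages: list[str] = field(default_factory=list)

@dataclass
class User:
    opted_out_of_training: bool = True  # the user believes they're covered

training_queue: list[list[str]] = []

def submit_message(user: User, convo: Conversation, text: str) -> None:
    convo.messages.append(text)
    if not user.opted_out_of_training:  # opt-out respected on the normal path
        training_queue.append(list(convo.messages))

def submit_feedback(user: User, convo: Conversation, rating: str) -> None:
    # The documented loophole: the feedback path ships the ENTIRE
    # conversation to training, and the opt-out is never consulted.
    # (Recording the rating itself is omitted; the point is what rides along.)
    training_queue.append(list(convo.messages))

user = User(opted_out_of_training=True)
convo = Conversation()
submit_message(user, convo, "something personal")  # stays out of training
submit_feedback(user, convo, "thumbs_up")          # whole history now queued
print(len(training_queue))  # prints 1, despite the opt-out
```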

📄 OpenAI Terms of Use · How your data is used to improve model performance


Anthropic Claude: Honest About Its Exceptions

Anthropic updated its consumer terms in August 2025 and, to its credit, is transparent about what the opt-out does and doesn't cover. From Section 4 of the Consumer Terms:

"We may use Materials to provide, maintain, and improve the Services... **unless you opt out* of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review..."*

Two explicit carve-outs. But at least they tell you upfront, and the opt-out doesn't degrade your service.
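
That clause reduces to a simple predicate. Here's my reading of Section 4 as code; the function name and flags are mine, not Anthropic's:

```python
def used_for_training(opted_out: bool, gave_feedback: bool,
                      safety_flagged: bool) -> bool:
    """My paraphrase of Section 4: the opt-out holds unless a carve-out applies."""
    return (not opted_out) or gave_feedback or safety_flagged

# With the opt-out set, either carve-out still flips the answer:
assert used_for_training(opted_out=True, gave_feedback=True, safety_flagged=False)
assert used_for_training(opted_out=True, gave_feedback=False, safety_flagged=True)
assert not used_for_training(opted_out=True, gave_feedback=False, safety_flagged=False)
```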

📄 Anthropic Consumer Terms · Anthropic Privacy Policy


Microsoft Copilot: Separate Toggles, No Training Carve-Outs

Full disclosure: I work at Microsoft. I'm including Copilot because excluding it would be intellectually dishonest, and my job doesn't exempt me from critical evaluation.

Copilot offers independent toggles for personalization, memory, and model training. The Copilot Privacy Controls page states:

"Opting out will exclude your future conversation activities from being used for training these AI models."

There's a caveat that data may still be used for "general product or system improvements... digital safety, security, and compliance" — but this is explicitly separated from model training.

📄 Microsoft Services Agreement · Copilot Privacy Controls


Mistral Le Chat: The GDPR Advantage

As a French company subject to the EU's GDPR, Mistral takes a notably clean approach. The training opt-out is a simple toggle, independent of chat functionality. Paid Pro users are opted out by default.

The one caveat: user feedback (ratings/comments) is always used regardless of your opt-out setting. But chat content, uploaded documents, and conversation history are respected.

📄 Mistral Terms of Service · Mistral Opt-Out FAQ


Perplexity: The Wildcard

Perplexity offers an opt-out toggle for AI data retention, but the company is currently facing a class-action lawsuit alleging that user data — including conversations in "Incognito" mode — was shared with Meta and Google for ad targeting regardless of privacy settings.

The lawsuit is ongoing, and allegations aren't findings. But it's a reminder that a toggle in a settings page is only as trustworthy as the infrastructure behind it.

📄 Perplexity Terms of Service · Perplexity Privacy Policy


The Summary That Should Concern You

| Provider | Opt-out degrades service? | Data still used after opt-out? | Exceptions |
| --- | --- | --- | --- |
| Google Gemini | ❌ Yes — kills history & personalization | ⚠️ Contradictory — documentation says both yes and no | 3-year retention of reviewed chats; contradictory policy language |
| OpenAI ChatGPT | ✅ No | ⚠️ Feedback triggers training | Thumbs up/down = full conversation |
| Anthropic Claude | ✅ No | ⚠️ Feedback + safety flags | Explicitly documented |
| Microsoft Copilot | ✅ No | ✅ No carve-outs for training | Safety/compliance retention only |
| Mistral Le Chat | ✅ No | ✅ No (paid users auto-opted-out) | Feedback always used |
| Perplexity | ✅ No | ⚠️ Under litigation | Alleged sharing despite settings |

Where Is the EU on This?

Here's what genuinely surprises me.

I've spent years grumbling about EU cookie consent banners. Every website, every visit, the same tedious popups — all in the name of protecting consumer privacy for a few tracking pixels.

Yet here we are in 2026, and AI providers are training models on paying consumers' conversations with opt-out mechanisms that range from "genuine but imperfect" to "functionally deceptive" — and the regulatory silence is deafening.

The GDPR was designed for exactly this scenario. Article 21 grants the right to object to data processing. Article 7 requires that consent be as easy to withdraw as it is to give. When Google forces you to choose between a functional product and privacy, is that consent freely given?

I'm not calling for more regulation for the sake of it — heaven knows we have enough cookie banners. But if Europe's privacy framework means anything, this is precisely where it should be applied. Not to cookies. To the AI models being trained on our most intimate conversations with technology.


What I Want You to Take Away

This isn't a "don't use AI" article. I use all of these tools daily. They're transformative. But:

  1. Check your settings. Today. Most providers default to training-on. The opt-out exists, but you have to find it and flip it yourself.

  2. Read the exceptions. An opt-out toggle means nothing if the small print carves out half your data anyway. Know what "opt-out" actually means for each provider you use.

  3. Be thoughtful about feedback. That casual thumbs-up on a response? On some platforms, it opens your entire conversation to training — even if you've opted out.

  4. Evaluate whether your paid tier actually buys you privacy. For some providers, it does. For others, you're just paying for better answers while your data flows into the same training pipeline as free users.

  5. Demand better. As consumers, as technologists, as an industry. The right to use a product you pay for without surrendering your conversations to model training shouldn't be a premium feature. It should be the default.


The research in this article reflects publicly available terms of service and privacy policies as of April 2026. All quotes are sourced directly from official provider documentation, linked inline. Views expressed are my own and do not represent my employer.
