Fast Company warns users to opt out of AI chatbots training on personal data

Why it matters

Major AI chatbots, including ChatGPT, Gemini, Claude, and Perplexity, train their language models on user prompts and interactions by default, creating privacy exposure for sensitive personal, health, financial, and corporate data. A Fast Company article published May 2, 2026, surfaced the practice alongside a Stanford HAI study examining six AI developers. All six train on user conversations by default, retain data long-term (Anthropic retains data for up to five years), and lack transparent de-identification protocols or human review processes. Each platform offers an opt-out mechanism:

- ChatGPT: the "Improve the model for everyone" toggle under Data Controls
- Gemini: Activity settings
- Claude: the "Help improve Claude" setting
- Perplexity: the "AI data retention" setting

The scope of data retention after opting out remains unclear. While the companies claim anonymization, the Stanford research does not detail the de-identification methods or their effectiveness. Retention periods for safety purposes (reportedly 30 days for some platforms) have not been uniformly disclosed across all four services.

Attorneys should flag this for clients deploying AI tools in regulated industries or handling confidential information. Without federal privacy regulation governing AI training data, organizations face potential exposure under existing frameworks: HIPAA for health data, GLBA for financial information, and state privacy laws like CCPA. Clients should audit their AI usage policies, implement opt-out protocols where available, and consider restricting sensitive data inputs until transparency standards improve.
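For clients that want a technical control on top of policy, a pre-submission screen can redact obviously sensitive strings before a prompt ever reaches a chatbot. The Python sketch below is illustrative only: the regex patterns and the screen_prompt helper are hypothetical stand-ins for a vetted data-loss-prevention tool, not a method described in the Fast Company article or the Stanford study.

import re

# Hypothetical patterns for illustration only; a real deployment would rely
# on a vetted data-loss-prevention or classification tool, not ad-hoc regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact likely sensitive tokens and report which categories fired."""
    flagged = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            flagged.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, flagged

if __name__ == "__main__":
    raw = "Client SSN is 123-45-6789; reach me at jane@example.com."
    cleaned, hits = screen_prompt(raw)
    print(cleaned)  # Client SSN is [REDACTED-SSN]; reach me at [REDACTED-EMAIL].
    print(hits)     # ['ssn', 'email']

In practice such a screen would sit in whatever gateway mediates employee access to AI tools, and the flagged categories would feed the audit trail that the organization's AI usage policy calls for.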
