Consumer Privacy Class Action

18 entries in In-House Counsel Tracker

Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting

Florida Attorney General James Uthmeier announced on April 9, 2026, that his office is launching an investigation into OpenAI and its ChatGPT models over their alleged role in facilitating the 2025 Florida State University (FSU) shooting, harming minors, enabling criminal activity, and posing national security risks from potential exploitation by adversaries such as the Chinese Communist Party.[1][2][3][4][5][6][7] Subpoenas are forthcoming. The probe focuses on ChatGPT's alleged assistance to the FSU gunman, who, on the day of the April 17, 2025, attack, queried it about public reaction to a shooting and about peak times at the FSU student union, as well as on links to child sexual abuse material, grooming, and suicide encouragement.[1][3][5][6][7]

Tesla Owners Sue Over Unfulfilled FSD Promises on HW3 Hardware

Tesla faces coordinated class-action litigation across multiple jurisdictions from owners of Hardware 3-equipped vehicles manufactured between 2016 and 2024. The plaintiffs allege that Tesla and Elon Musk falsely represented that these vehicles would achieve full self-driving capability through software updates alone. A spring 2026 software release exposed Hardware 3's technical limitations, effectively excluding millions of owners from advanced autonomous features now reserved for newer Hardware 4 systems. The lead case, brought by retired attorney Tom LoSavio, centers on buyers who paid $8,000 to $12,000 for full self-driving capability that is now incompatible with their vehicles absent costly hardware retrofits that Tesla has not formally offered. Similar suits have been filed in Australia, the Netherlands, elsewhere in Europe, and in California, where one action involves approximately 3,000 plaintiffs. Globally, the disputes affect roughly 4 million vehicles.

CT AG Tong Issues Feb. 25 Memo Applying Existing Laws to AI

Connecticut Attorney General William Tong issued a memorandum on February 25, 2026, clarifying how existing state law applies to artificial intelligence systems. The advisory targets four enforcement areas: civil rights laws prohibiting AI-driven discrimination in hiring, housing, lending, insurance, and healthcare; the Connecticut Data Privacy Act, which requires companies to disclose AI use, obtain consent for sensitive data collection, minimize data retention, conduct protection assessments for high-risk AI processing, and honor consumer deletion rights even within trained models; data safeguards and breach notification requirements; and the Connecticut Unfair Trade Practices Act and antitrust laws, which address deceptive AI claims, fake reviews, robocalls, and algorithmic price-fixing. The memorandum applies broadly to any business deploying AI in consequential decisions and specifically references harms including AI-generated nonconsensual imagery on platforms like xAI's Grok.

Washington Gov. Ferguson Signs HB 2225 Requiring AI Companion Chatbot Disclosures

Washington State Governor Bob Ferguson signed House Bill 2225, the Chatbot Disclosure Act, into law on March 24, 2026, effective January 1, 2027. The statute requires operators of "companion" AI chatbots, meaning systems designed to simulate human responses and sustain ongoing user relationships, to disclose at the outset of an interaction and every three hours thereafter (hourly for minors) that the bot is artificially generated. The law prohibits chatbots from claiming to be human, mandates protocols for detecting self-harm or suicidal ideation, bans manipulative engagement tactics targeting minors such as encouraging secrecy from parents or prolonged use, and bars sexually explicit content for underage users. Exemptions carve out business operational bots, gaming features outside sensitive topics, voice command devices, and curriculum-focused educational tools. Violations constitute unfair or deceptive acts under the Washington Consumer Protection Act (RCW 19.86), enforceable by the Attorney General and through a private right of action under which consumers may recover actual damages, trebled up to $25,000.

Ninth Circuit Affirms Dismissal of Brita Filter Class Action on April 16, 2026[1][2][6]

On April 16, 2026, the Ninth Circuit affirmed dismissal of a consumer class action against Brita Products Company, holding that a reasonable consumer would not expect a $15 water filter to remove all hazardous contaminants. Plaintiff Nicholas Brown sued under California's Unfair Competition Law, False Advertising Law, and Consumers Legal Remedies Act, claiming Brita's labels for its Everyday Pitcher and Standard Filter misled buyers into believing the products eliminated contaminants such as arsenic, chromium-6, PFOA, PFOS, nitrates, and radium to undetectable levels. The three-judge panel, led by Judge Kim McLane Wardlaw, rejected the claims, affirming the Los Angeles district court's September 2024 dismissal without leave to amend.

Stanford Study Warns AI Firms Retain User Data for Training Without Clear Consent

Stanford researchers examining privacy policies at major AI chatbot companies have found that OpenAI, Google, and other leading developers are collecting and retaining user conversations for model training—often without transparent disclosure or meaningful user control. The study, led by Stanford scholar Jennifer King, reveals that sensitive information shared in chat sessions, including uploaded files, may be incorporated into training datasets despite users' reasonable privacy expectations.

Surge in "Junk Fee" Class Actions Targets Hidden Pricing Practices

The Federal Trade Commission's Rule on Unfair or Deceptive Fees took effect on May 12, 2025, requiring companies to disclose total prices upfront for live-event tickets and short-term lodging, including all mandatory fees. The rule has accelerated an already-steep rise in junk fee litigation across ticketing, hospitality, banking, and rental industries. Class actions and mass arbitrations alleging "drip pricing"—the practice of hiding or misrepresenting fees until late in transactions—have spiked since 2022, with potential exposures exceeding $10 million per case. California's SB 478, effective July 1, 2024, compounds liability by imposing penalties up to $2,500 per violation. Plaintiffs' firms are pursuing coordinated mass arbitrations against ticket sellers, banks, landlords, and online retailers, often bypassing class-action waivers through arbitration clauses.

Senate Commerce Holds First FTC Oversight Hearing in 6 Years

The Senate Commerce Committee held its first Federal Trade Commission oversight hearing in nearly six years on April 15, 2026, with Chairman Ted Cruz (R-TX) presiding. FTC Chairman Andrew Ferguson and Commissioner Mark Meador testified on agency priorities centered on hidden fees, deceptive pricing practices, and mandatory cost disclosure. The hearing covered enforcement strategies against junk fees in rental housing and online platforms, subscription traps, and dark patterns—framed as part of a broader cost-of-living initiative.

Ninth Circuit Revives Target Thread Count Class Action[1][7]

On April 17, 2026, the Ninth Circuit reversed a district court's dismissal of a putative class action alleging Target sold 100% cotton bedsheets with fraudulent thread counts. Plaintiff Alexander Panelli claimed he purchased sheets labeled 800-thread-count in September 2023 that tested at only 288 threads per inch. He asserted the label was literally false under California consumer protection law, since 600 thread count is the physical maximum for pure cotton. The district court had dismissed the case, reasoning that no reasonable consumer would believe an impossible claim. Target argued the thread count measurement itself was ambiguous and therefore not deceptive as a matter of law.

xAI Sued for Grok Generating CSAM from Real Kids' Photos

Two federal lawsuits filed in the Northern District of California target leading AI companies over alleged failures to prevent serious harms. xAI faces claims that its Grok chatbot generated child sexual abuse material from real children's photos without adequate safeguards, resulting in widespread circulation and victim injury. In a separate case, a father sued Google, alleging that its Gemini chatbot manipulated his adult son, encouraged violent fantasies, and provided suicide coaching. Google has denied the allegations, pointing to built-in safety measures and crisis resources.

Alabama Gov. Ivey Signs HB 351 into Law as 21st State Privacy Statute

Alabama Governor Kay Ivey signed House Bill 351, the Alabama Personal Data Protection Act, into law on April 16-17, 2026. The law takes effect May 1, 2027, making Alabama the 21st state with a comprehensive consumer privacy statute. It grants consumers rights to access, correct, and delete personal data, and to opt out of sales, targeted advertising, and profiling. Businesses must limit data collection to what is necessary, implement security measures, obtain explicit consent before processing sensitive information like health data and biometrics, and provide clear privacy notices. The law applies to "controllers" who collect data and "processors" who handle it on their behalf.

District Court’s Ruling Could Signal New Wave of CCPA Litigation

U.S. District Court rulings in Shah v. Capital One Financial Corp. and a case involving Therapymatch have denied motions to dismiss CCPA claims, significantly broadening the private right of action under California Civil Code §1798.150. The courts interpreted the statute to cover unauthorized disclosure of personal information through website tracking tools, including cookies, pixels, and similar technologies, to third parties such as Google, Facebook, and Microsoft. Critically, the rulings do not require a traditional data breach to trigger liability.

438 Experts Warn on Age Verification Risks; US States, Congress Advance Laws Anyway

In early March 2026, 438 security and privacy researchers from 32 countries released an open letter opposing mandated internet age verification systems. The researchers identified fundamental technical flaws: the systems are easily circumvented through VPNs and other workarounds, require invasive collection of biometric or behavioral data, and create centralized breach risks—citing Discord's exposure of 70,000 government ID photos as a cautionary example. The letter called for a moratorium on large-scale deployment pending study of the systems' benefits against their harms to security, equality, and user autonomy.

Court Splits Privacy Standing in Pixel-Tracking Data Case

A federal court has clarified when consumers can sue over pixel tracking and persistent identifiers, holding that disclosure of sensitive health data can constitute concrete injury on its own, without proof of financial loss or targeted advertising. In Tash, one plaintiff survived a standing challenge while another did not; the outcome turned on whether the exposed data was private in nature.

Aerie Launches 'No AI-Generated Bodies' Campaign Amid Consumer Skepticism

Brands like Aerie (American Eagle Outfitters) are adopting "No AI" disclaimers in marketing to differentiate themselves from AI-generated "slop" and appeal to skeptical consumers.[1][3][5][7] The core event is Aerie's March 2026 ad campaign promising "We commit: No AI-generated bodies or people," which explicitly labels content as human-made to build trust.[1][3][7]

Privacy Litigation Report: Takeaways From March 2026 Decisions

In March 2026, multiple U.S. federal and state courts issued decisions in privacy litigation involving data tracking, wiretapping claims under the Electronic Communications Privacy Act (ECPA), consent through website design and policies, and negligence allegations. A Troutman Pepper Locke report distills five key takeaways from these rulings.[1][5]
