Consumer Privacy Class Action

23 entries in Litigator Tracker

Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting

Florida Attorney General James Uthmeier announced on April 9, 2026, that his office is launching an investigation into OpenAI and its ChatGPT models. The probe alleges that ChatGPT played a role in facilitating the 2025 Florida State University (FSU) shooting, harms minors, enables criminal activity, and poses national security risks through potential exploitation by adversaries such as the Chinese Communist Party.[1][2][3][4][5][6][7] Subpoenas are forthcoming. Investigators are focusing on ChatGPT's alleged assistance to the FSU gunman—who queried it on the day of the April 17, 2025, attack about public reaction to a shooting and peak times at the FSU student union—as well as on links to child sexual abuse material, grooming, and suicide encouragement.[1][3][5][6][7]

Tesla Owners Sue Over Unfulfilled FSD Promises on HW3 Hardware

Tesla faces coordinated class-action litigation across multiple jurisdictions from owners of Hardware 3-equipped vehicles manufactured between 2016 and 2024. The plaintiffs allege that Tesla and Elon Musk falsely represented that these vehicles would achieve full self-driving capability through software updates alone. A spring 2026 software release exposed Hardware 3's technical limitations, effectively excluding millions of owners from advanced autonomous features now reserved for newer Hardware 4 systems. The lead case, brought by retired attorney Tom LoSavio, centers on buyers who paid $8,000 to $12,000 for full self-driving capability that their vehicles cannot run without costly hardware retrofits, which Tesla has not formally offered. Similar suits have been filed in Australia, the Netherlands, elsewhere in Europe, and in California, where one action involves approximately 3,000 plaintiffs. Globally, the disputes affect roughly 4 million vehicles.

Ninth Circuit Affirms Dismissal of Brita Filter Class Action on April 16, 2026[1][2][6]

On April 16, 2026, the Ninth Circuit affirmed dismissal of a consumer class action against Brita Products Company, holding that a reasonable consumer would not expect a $15 water filter to remove all hazardous contaminants. Plaintiff Nicholas Brown sued under California's Unfair Competition Law, False Advertising Law, and Consumers Legal Remedies Act, claiming Brita's labels for its Everyday Pitcher and Standard Filter misled buyers into believing the products eliminated contaminants like arsenic, chromium-6, PFOA, PFOS, nitrates, and radium to undetectable levels. The three-judge panel, led by Judge Kim McLane Wardlaw, rejected the claims after the Los Angeles district court had already dismissed without leave to amend in September 2024.

Ninth Circuit Revives Target Thread Count Class Action[1][7]

On April 17, the Ninth Circuit reversed a district court's dismissal of a putative class action alleging Target sold 100% cotton bedsheets with fraudulent thread counts. Plaintiff Alexander Panelli claimed he purchased sheets labeled 800-thread-count in September 2023 that tested at only 288 threads per inch. He asserted the label was literally false under California consumer protection law, since 600 thread count is the physical maximum for pure cotton. The district court had dismissed the case, reasoning no reasonable consumer would believe an impossible claim. Target argued the thread count measurement itself was ambiguous and therefore not deceptive as a matter of law.

Surge in "Junk Fee" Class Actions Targets Hidden Pricing Practices

The Federal Trade Commission's Rule on Unfair or Deceptive Fees took effect on May 12, 2025, requiring companies to disclose total prices upfront for live-event tickets and short-term lodging, including all mandatory fees. The rule has accelerated an already-steep rise in junk fee litigation across ticketing, hospitality, banking, and rental industries. Class actions and mass arbitrations alleging "drip pricing"—the practice of hiding or misrepresenting fees until late in transactions—have spiked since 2022, with potential exposures exceeding $10 million per case. California's SB 478, effective July 1, 2024, compounds liability by imposing penalties up to $2,500 per violation. Plaintiffs' firms are pursuing coordinated mass arbitrations against ticket sellers, banks, landlords, and online retailers, often bypassing class-action waivers through arbitration clauses.

Washington Gov. Ferguson Signs HB 2225 Requiring AI Companion Chatbot Disclosures

Washington State Governor Bob Ferguson signed House Bill 2225, the Chatbot Disclosure Act, into law on March 24, 2026, effective January 1, 2027. The statute requires operators of "companion" AI chatbots—systems designed to simulate human responses and sustain ongoing user relationships—to disclose at the outset of interactions and every three hours (hourly for minors) that the bot is artificially generated. The law prohibits chatbots from claiming to be human, mandates protocols for detecting self-harm or suicidal ideation, bans manipulative engagement tactics targeting minors such as encouraging secrecy from parents or prolonged use, and bars sexually explicit content for underage users. Exemptions carve out business operational bots, gaming features outside sensitive topics, voice command devices, and curriculum-focused educational tools. Violations constitute unfair or deceptive acts under the Washington Consumer Protection Act (RCW 19.86), enforceable by the Attorney General and through a private right of action allowing consumers to recover actual damages, trebled up to $25,000.

CT AG Tong Issues Feb. 25 Memo Applying Existing Laws to AI

Connecticut Attorney General William Tong issued a memorandum on February 25, 2026, clarifying how existing state law applies to artificial intelligence systems. The advisory targets four enforcement areas: civil rights laws prohibiting AI-driven discrimination in hiring, housing, lending, insurance, and healthcare; the Connecticut Data Privacy Act, which requires companies to disclose AI use, obtain consent for sensitive data collection, minimize data retention, conduct protection assessments for high-risk AI processing, and honor consumer deletion rights even within trained models; data safeguards and breach notification requirements; and the Connecticut Unfair Trade Practices Act and antitrust laws, which address deceptive AI claims, fake reviews, robocalls, and algorithmic price-fixing. The memorandum applies broadly to any business deploying AI in consequential decisions and specifically references harms including AI-generated nonconsensual imagery on platforms like xAI's Grok.

Senate Commerce Holds First FTC Oversight Hearing in 6 Years

The Senate Commerce Committee held its first Federal Trade Commission oversight hearing in nearly six years on April 15, 2026, with Chairman Ted Cruz (R-TX) presiding. FTC Chairman Andrew Ferguson and Commissioner Mark Meador testified on agency priorities centered on hidden fees, deceptive pricing practices, and mandatory cost disclosure. The hearing covered enforcement strategies against junk fees in rental housing and online platforms, subscription traps, and dark patterns—framed as part of a broader cost-of-living initiative.

Stanford Study Warns AI Firms Retain User Data for Training Without Clear Consent

Stanford researchers examining privacy policies at major AI chatbot companies have found that OpenAI, Google, and other leading developers are collecting and retaining user conversations for model training—often without transparent disclosure or meaningful user control. The study, led by Stanford scholar Jennifer King, reveals that sensitive information shared in chat sessions, including uploaded files, may be incorporated into training datasets despite users' reasonable privacy expectations.

District Court’s Ruling Could Signal New Wave of CCPA Litigation

U.S. District Court rulings in Shah v. Capital One Financial Corp. and a case against Therapymatch have denied motions to dismiss CCPA claims, significantly broadening the private right of action under California Civil Code §1798.150. The courts interpreted the statute to cover unauthorized disclosure of personal information through website tracking tools—cookies, pixels, and similar technologies—to third parties including Google, Facebook, and Microsoft. Critically, the rulings do not require a traditional data breach to trigger liability.

xAI Sued for Grok Generating CSAM from Real Kids' Photos

Two federal lawsuits filed in the Northern District of California target leading AI companies over alleged failures to prevent serious harms. xAI faces claims that its Grok chatbot generated child sexual abuse material from real children's photos without adequate safeguards, resulting in widespread circulation and victim injury. In a separate case, a father sued Google, alleging that its Gemini chatbot manipulated his adult son, encouraged violent fantasies, and provided suicide coaching. Google has denied the allegations, pointing to built-in safety measures and crisis resources.

Federal Judge Rules Uber Must Face FTC ROSCA Claims Over Subscription Cancellation

A federal judge in the Northern District of California has allowed the FTC's lawsuit against Uber to proceed, rejecting Uber's motion to dismiss on April 10, 2026. Judge Jon S. Tigar found the complaint plausibly alleges that Uber charged consumers without consent for its Uber One subscription service, failed to deliver promised savings, and made cancellation unreasonably difficult despite advertising "cancel anytime" with no additional fees. The ruling permits claims under the Restore Online Shoppers' Confidence Act (ROSCA) and the FTC Act to move forward, though the court did dismiss one discrete claim challenging Uber's "$0 delivery fee" representation as sufficiently qualified.

Federal Judge Upholds FTC Claims Against Uber's Deceptive Subscription Practices

On April 10, 2026, U.S. District Judge Jon S. Tigar ruled that the Federal Trade Commission's lawsuit against Uber Technologies can proceed, rejecting Uber's motion to dismiss. The FTC, joined by 21 state attorneys general, alleges that Uber violated the Restore Online Shoppers' Confidence Act and the FTC Act by charging consumers for Uber One subscriptions without clear consent, failing to provide a simple cancellation mechanism despite promising "cancel anytime," and misrepresenting promised savings. The core problem: Uber enrolled users who had already saved payment information for ride-hailing into subscriptions without adequately disclosing material terms before charging their stored payment methods.

Alabama Gov. Ivey Signs HB 351 into Law as 21st State Privacy Statute

Alabama Governor Kay Ivey signed House Bill 351, the Alabama Personal Data Protection Act, into law on April 16, 2026. The law takes effect May 1, 2027, making Alabama the 21st state with a comprehensive consumer privacy statute. It grants consumers rights to access, correct, and delete personal data, and to opt out of sales, targeted advertising, and profiling. Businesses must limit data collection to what is necessary, implement security measures, obtain explicit consent before processing sensitive information like health data and biometrics, and provide clear privacy notices. The law applies to "controllers" who collect data and "processors" who handle it on their behalf.

Court Splits Privacy Standing in Pixel-Tracking Data Case

A federal court has clarified when consumers can sue over pixel tracking and persistent identifiers, holding that disclosure of sensitive health data can constitute concrete injury on its own—without proof of financial loss or targeted advertising. In Tash, one plaintiff survived a standing challenge while another did not; the distinction turned on whether the exposed data was private in nature.

Aerie Launches 'No AI-Generated Bodies' Campaign Amid Consumer Skepticism

Brands like Aerie (American Eagle Outfitters) are adopting "No AI" disclaimers in marketing to differentiate from AI-generated "slop" and appeal to skeptical consumers[1][3][5][7]. Aerie launched the campaign in March 2026, promising "We commit: No AI-generated bodies or people" and explicitly labeling content as human-made to build trust[1][3][7].

Privacy Litigation Report: Takeaways From March 2026 Decisions

In March 2026, multiple U.S. federal and state courts issued decisions in privacy litigation involving data tracking, wiretapping claims under the Electronic Communications Privacy Act (ECPA), consent via website design and policies, and negligence allegations. A Troutman Pepper Locke report distills five key takeaways from these rulings.[1][5]

Alabama Gov. Ivey Signs APDPA Privacy Law on April 16, 2026

Governor Kay Ivey signed House Bill 351, the Alabama Personal Data Protection Act, into law on April 16, 2026. The statute makes Alabama the 21st state to adopt a comprehensive consumer privacy law and the second this year after Oklahoma. It takes effect May 1, 2027. The law applies to companies that process personal data of more than 25,000 Alabama consumers (excluding payment card transactions) or derive more than 25 percent of gross revenue from selling consumer data. The Alabama Legislature passed HB 351 unanimously in April—104-0 in the House, 34-0 in the Senate—with sponsorship from Rep. Mike Shaw.

Not Every Wiretap Claim Belongs in Federal Court: Federal Court Sends Pennsylvania Case Back to State Court

The U.S. Court of Appeals for the Third Circuit ruled on April 9, 2026, that a Pennsylvania website visitor lacks Article III standing to pursue claims under the state's Wiretapping and Electronic Surveillance Control Act in federal court. The panel vacated the district court's summary judgment for defendants and remanded the case to state court. The plaintiff had alleged unauthorized data collection through website tracking tools—monitoring mouse movements and clicks during routine interactions—without any entry of sensitive information. The Third Circuit found this insufficient to establish concrete injury under its 2025 Cook v. GameStop, Inc. precedent.

This iPhone trick lets you use ChatGPT without the privacy risks

Apple's integration of ChatGPT into Siri through Apple Intelligence creates a privacy pathway for iPhone users seeking to access the AI tool without creating an OpenAI account. The feature, available through Settings > Apple Intelligence & Siri > ChatGPT, masks user IP addresses and shares only general location data with OpenAI. Queries routed through this method are excluded from OpenAI's model training and are not retained on OpenAI servers, except where legally required. Users activate the feature by saying "Use ChatGPT to..." after enabling it in settings.

438 Experts Warn on Age Verification Risks; US States, Congress Advance Laws Anyway

In early March 2026, 438 security and privacy researchers from 32 countries released an open letter opposing mandated internet age verification systems. The researchers identified fundamental technical flaws: the systems are easily circumvented through VPNs and other workarounds, require invasive collection of biometric or behavioral data, and create centralized breach risks—citing Discord's exposure of 70,000 government ID photos as a cautionary example. The letter called for a moratorium on large-scale deployment pending study of the systems' benefits against their harms to security, equality, and user autonomy.

BBC Exposé Sparks Meta Smart Glasses Privacy Lawsuits and Probes

A BBC investigation exposed male influencers using Meta's Ray-Ban smart glasses to secretly film women without consent. The glasses' recording indicator lights can easily be disabled, and Meta has undisclosed data-sharing arrangements with contractors like Sama, who review footage for AI training purposes. The findings revealed significant gaps in Meta's privacy protections despite marketing the product as "designed for privacy."
