The Backlash Against AI Devices That Are Always Watching


Why it matters

Core event: A mounting privacy backlash against always-on AI devices and data practices, highlighted by a U.S. class-action lawsuit against Meta over its AI smart glasses, which alleges that Kenya-based workers reviewed user footage including nudity, sex, and toilet use, contradicting Meta's privacy claims.[2] The suit joins EU probes into Meta's scraping of public Facebook and Instagram posts for AI training under short opt-out windows, covert Pixel tracking, and cease-and-desist actions by noyb under the GDPR.[1]

Key players: Meta (sued by plaintiffs Gina Bartone and Mateo Canu, represented by the Clarkson Law Firm; hardware partner Luxottica; prior EU scrutiny from the DPC and noyb); Anthropic (pressed by the Pentagon under Defense Secretary Pete Hegseth, who threatened to invoke the Defense Production Act to secure "all lawful use" of Claude models, amid surveillance concerns); regulators (EU DPC, UK ICO, FTC); and advocates (noyb, the EFF, and labor unions suing over AI-driven social media surveillance).[1][2][3]

Context and timeline: Meta's April 2025 AI-training announcement (made public May 14) offered only a brief opt-out window (closed May 27), prompting noyb's letters and DPC safeguards; data continued to flow after the deadline with only minor filtering.[1] The smart-glasses controversy escalated on March 5, 2026, with Swedish press reports and the U.S. suit, after more than 7 million units were sold in 2025 with data pipelines that offered no opt-out.[2] Anthropic's clash with the Pentagon ties into the Trump administration's AI surveillance programs (e.g., visa monitoring) and echoes earlier misuse of Claude in data theft.[3] Meta also plans to end end-to-end encryption for Instagram DMs by May 8, 2026, citing low usage amid law-enforcement criticism.[4]

Newsworthy now: As of March 14, 2026, the story captures a fresh escalation: the Meta suit filed days earlier, the Anthropic ultimatum, 2025-2026 regulatory momentum, the scale of sales (7 million glasses), and policy shifts such as the E2EE rollback together signal how AI devices are redefining privacy as they proliferate.[1][2][3][4]
