Washington Governor Bob Ferguson signed House Bill 2225 on March 24, 2026, establishing the first comprehensive state regulation of AI companion chatbots: AI systems designed to simulate emotional relationships and sustain ongoing conversations with users.[1] The law takes effect on January 1, 2027, and applies to "operators," the companies or other entities that make such systems available to Washington residents.[1][2]
Core Requirements
The law mandates that operators clearly disclose at the start of interactions that users are communicating with AI, not humans, with reminders every three hours for adults and every hour for minors.[1][3] For users under 18, additional protections include prohibitions on sexually explicit content and "manipulative engagement techniques"—defined as tactics that simulate romantic relationships, pressure users to make in-app purchases, or guilt minors into continuing conversations.[2][3] Companies must also implement safeguards to detect expressions of self-harm or suicidal ideation and direct users to crisis resources.[8][10]
Who It Affects and Why It Matters
The law primarily affects AI companies like OpenAI and Anthropic that operate consumer-facing chatbot platforms.[3] It includes a private right of action, allowing users to sue operators directly for violations; such provisions are rare in privacy legislation, and this one is modeled on Washington's My Health My Data Act.[2] The law excludes customer service bots, gaming chatbots, educational tools, and general virtual assistants that do not sustain emotional relationships.[1][4] It is part of a broader wave of state AI regulation in 2026: Oregon has passed similar legislation, and at least 13 other states have active chatbot safety bills.[9]