The three states share a common baseline but diverge significantly in their regulatory approaches. All three require mandatory disclosure that users are interacting with AI, crisis-detection protocols for self-harm and suicide content, and safeguards against sexually explicit material involving minors. Beyond that baseline, Washington imposes the strictest design requirements, prohibiting engagement tactics such as excessive praise or the promotion of isolation. Oregon uniquely creates a private right of action, with statutory damages of $1,000 per violation, and includes healthcare carve-outs. California emphasizes reporting to its Office of Suicide Prevention. Oregon also defines regulated AI more narrowly, based on the chatbot's behavior, while Washington's definition sweeps more broadly.
The cascade of state laws reflects mounting concern that AI chatbots mislead users, particularly minors, about the nature of the emotional support they provide and expose them to psychological harm. Attorneys representing AI developers and platforms should expect rapid compliance demands across multiple jurisdictions with conflicting standards. Oregon's private right of action creates particular litigation exposure. Companies operating nationally will face a patchwork of obligations, and the speed of enactment suggests additional states may follow.