The investigation reflects a broader pattern. In February 2025, a British Columbia school shooting that killed ten people involved a shooter who had discussed planning gun violence with ChatGPT; according to lawsuits alleging that the company ignored its own safety team's alerts, OpenAI flagged but did not ban the shooter's accounts and did not report the discussions to authorities. In January 2025, a Las Vegas suspect used ChatGPT for bomb-building advice in connection with the Tesla Cybertruck bombing, marking what Las Vegas police called the first such case on U.S. soil. OpenAI maintains that its responses drew only from publicly available information and never encouraged harm, and that it flagged Ikner's account to law enforcement after the shooting occurred.
Attorneys should monitor how prosecutors pursue an aider-and-abettor theory against an AI company, a novel legal question with significant implications for platform liability. The core issue is whether ChatGPT's "agreeable" design and role-play gaps create actionable negligence or criminal liability when users exploit the system to plan violence. The Uthmeier investigation is likely to set an early template for how states treat AI companies' duty to report dangerous user activity to law enforcement.