Seven Families Sue OpenAI Over Suspect's ChatGPT Use in 2025 FSU Shooting

Why it matters

Seven families of victims of a 2025 mass shooting at Florida State University have filed lawsuits against OpenAI, claiming the company negligently failed to alert law enforcement to the suspect's extensive ChatGPT interactions. The suits allege that Phoenix Ikner, the accused gunman now awaiting trial, was in constant communication with the chatbot and may have received guidance on carrying out the attack. The families are pursuing negligence claims, arguing that OpenAI breached its duty of care by failing to flag foreseeable harm despite the chatbot's design and the nature of the interactions.

The lawsuits were filed April 29, 2026, nearly a year after the shooting. OpenAI has not yet publicly responded to the specific allegations. The extent of Ikner's ChatGPT interactions, and what, if anything, the platform's systems flagged, remains unclear from the available court filings.

This case arrives amid growing litigation over AI platform liability. A similar lawsuit emerged two months earlier following a Canadian school shooting, also naming OpenAI and alleging ChatGPT provided harmful advice. Attorneys should monitor how courts treat negligence and duty-of-care claims against AI companies, particularly whether platforms face legal obligations to report suspicious user activity to law enforcement. The outcome could establish precedent for tech liability in mass casualty events and reshape how AI companies approach content moderation and threat detection.
