Key players include AI developers such as OpenAI, Google (Bard), and Anthropic (Claude); affected parties such as Air Canada (ordered in 2024 to honor a refund under a discount policy its chatbot fabricated), Deloitte (2025 partial refund on a government contract over fake citations), and law firms sanctioned for submitting ChatGPT-generated fictitious case law (late 2025). U.S. attorneys general issued warnings in December 2025 demanding audits for "delusional outputs," while tribunals and other agencies enforced liability; the post names no specific individuals or new legislation[2][4][13].
The context stems from LLMs' inherent design: sparse training data, a lack of grounding in reality, and pressure to always respond produce factual errors, fabricated citations, and impossible scenarios, a failure mode noted in neural-network demos as far back as 1995 and escalating after ChatGPT's late-2022 launch[1][5][9]. Timeline: Bard's February 2023 telescope error wiped roughly $100B off Alphabet's market value; the Air Canada and Deloitte cases followed (2024-2025); AG warnings and court sanctions arrived in late 2025; and 2026 student studies note the problem persists, with 94% of students reporting variable accuracy[2][7].
This is newsworthy now amid 2026's regulatory scrutiny and stalled technical progress: despite "reasoning models," hallucinations endure as a compliance risk, and OpenAI's restructuring and joint safety evaluations underscore unresolved liability exposure for businesses facing lawsuits. Incidents such as the 2025 ChatGPT-linked fraud claims amplify calls for mitigations like human oversight and "Safe Completions" training, positioning AI as a high-stakes product[2][6][9].