The U.S. Attorney's Office for the Eastern District of North Carolina removed Renfer from the case and referred the matter to the Justice Department's Office of Professional Responsibility. U.S. Attorney Ellis Boyle subsequently warned staff about AI use. Exactly which citations were fabricated and which cases were misrepresented remains unclear.
The incident fits a broader pattern of AI-generated hallucinations in legal work. More than 800 similar cases have been tracked worldwide, most notably the 2023 episode in which New York lawyers relied on ChatGPT to draft a brief citing entirely fictional cases. Recent research finds that AI legal tools hallucinate on anywhere from 16 to 82 percent of queries, depending on the tool and the task. Attorneys should therefore assume that AI-drafted briefs require at least the same level of citation verification as human work, and likely more. The incident also follows a February 2026 ruling by Judge Jed Rakoff of the Southern District of New York denying attorney-client privilege to documents generated with Claude AI, a decision that exposes otherwise-privileged communications to discovery when AI is involved in their creation.