A Stanford study cited in the guidance itself found that leading legal research companies' generative AI systems hallucinate between 17 and 33 percent of the time. Critics argue this finding undermines the opinion's central premise: that a lawyer's prior experience with a tool justifies reduced scrutiny. The logical tension deepens given the opinion's acknowledgment that AI technology is "rapidly changing," making past familiarity an unreliable predictor of current performance. The guidance does not address how experience-based shortcuts apply to evolving systems.
Attorneys should treat this guidance as a permissive floor, not a ceiling. The opinion arrives amid documented sanctions cases involving AI-generated fake citations, including instances noted by Chief Justice John Roberts in his 2023 Year-End Report on the Federal Judiciary. The disconnect between the hallucination risks the ABA itself acknowledges and the verification standards it recommends suggests that ethics opinions alone will not prevent malpractice. Firms relying on this guidance should implement independent governance infrastructure (systematic verification protocols, audit trails, and output review procedures) rather than depending on individual attorney judgment about when verification can be reduced.