Above the Law Warns Lawyers on ChatGPT Confidentiality Risks


Why it matters

Above the Law published an advisory on April 20, 2026, warning attorneys against using public generative AI tools like ChatGPT for client work, citing the risk of confidentiality breaches and violations of ABA Model Rule 1.6(c). The piece argues that privacy toggles and similar safeguards do not adequately prevent unauthorized disclosure of sensitive information, and that inputting client data into these systems, even with protective measures enabled, fails to meet the ethical standard for preventing unintended access.

The advisory does not identify specific incidents of breach or name particular firms affected. It references the 2023 sanctions against New York lawyers who relied on ChatGPT to generate fictitious case citations, and notes ongoing concerns about data training practices, hallucinations, and potential privilege waiver. The scope of the warning extends to any use of public AI platforms for substantive legal work involving client information.

The piece recommends practices including the use of hypotheticals, removal of identifying details, and application of the "New York Times test"—asking whether a prompt would be acceptable if published. The timing reflects accelerating adoption of AI tools by law firms seeking efficiency gains, coupled with ABA Formal Opinion 512 (July 2024), which reaffirmed duties of competence, supervision, and confidentiality. Attorneys should treat this as a reminder that operational convenience does not override confidentiality obligations, and should audit current AI use policies accordingly.
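The "removal of identifying details" practice above can be illustrated with a minimal sketch. This is a hypothetical example, not tooling referenced by the advisory: it scrubs obvious identifiers (emails, US-style phone numbers, and a caller-supplied list of client names) from text before it would be pasted into a prompt. Regex redaction is no substitute for professional judgment or firm-approved review, and it will miss indirect identifiers.

```python
import re

def redact_prompt(text, client_names):
    """Replace obvious identifiers with placeholders before any AI prompt.

    Illustrative only -- pattern matching cannot catch every identifying
    detail, so redacted text still requires attorney review.
    """
    # Strip email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Strip US-style phone numbers (e.g. 555-123-4567)
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    # Strip known client names, case-insensitively
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text
```

A redacted prompt can then be combined with the article's other suggestions, such as recasting the remaining facts as a hypothetical before submitting anything to a public model.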
