The AI Knows Too Much: When Employees Feed Trade Secrets into Generative AI Tools

Why it matters

Employees who feed trade secrets into public generative AI tools like ChatGPT, Claude, or Google Gemini risk waiving legal protections, because those inputs may constitute voluntary disclosure to third parties without confidentiality guarantees.[1][2] The core event is a February 2026 ruling by the U.S. District Court for the Southern District of New York in United States v. Heppner, which held that attorney-client privilege did not attach to documents prepared using Anthropic's Claude because the tool's privacy policy allows data sharing with third parties; commentators now extend that logic to trade secrets under the Defend Trade Secrets Act (DTSA).[1][2]

Involved parties include the federal courts, AI providers (Anthropic, OpenAI, Google), and companies such as Samsung, where employees uploaded proprietary source code to ChatGPT for debugging in 2023, prompting an internal ban.[1][2] No specific individuals beyond judges and executives are named, but the burden falls broadly on employers, who are advised to adopt tailored AI policies, training, and updated IP agreements drafted to avoid unfair labor practice claims.[1]

The issue arose amid surging AI adoption: Deloitte's 2026 report notes a 50% rise in sanctioned AI access, and Pew finds that 1 in 5 U.S. workers use AI, alongside hidden usage (45% of employees conceal it, per Slingshot's January 2026 report) and inconsistent policies.[2][3][5] Timeline: 2023, Samsung leak; February 2026, Heppner ruling; March 2026, legal analyses urging protections.[1][2]

The story is newsworthy now because courts are applying the privacy realities of AI tools to trade secrets, amplifying risk as employee experimentation grows without guidance (57% use public tools, widening compliance gaps) and as HR costs and job fears rise, per 2026 reports.[1][2][3][5][6]

Sources
