Fast Company guide details secure PDF redaction for AI chatbots

Why it matters

On April 18, 2026, Fast Company published a practical guide to properly redacting sensitive information from PDFs before uploading them to ChatGPT and other AI chatbots. The article emphasizes using tools that permanently delete the underlying text—such as Apple's Preview app—rather than ineffective markup methods like highlighting. A critical caveat: logged-in accounts still link all uploads to user identities, creating a privacy trail even when documents appear redacted.
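The markup-versus-deletion distinction is easy to verify yourself: a black rectangle drawn over text leaves the text objects intact inside the PDF's content streams, where any extraction tool can read them. The sketch below is a minimal, hypothetical check (not from the Fast Company guide) that scans a PDF's raw bytes for literal strings passed to the `Tj` text-showing operator, decompressing FlateDecode streams where needed. If supposedly redacted content shows up in the output, the redaction was cosmetic only.

```python
import re
import zlib

def extract_visible_strings(pdf_bytes: bytes) -> list[str]:
    """Pull literal text strings out of a PDF's content streams.

    A properly redacted PDF yields nothing sensitive here; a PDF
    "redacted" with a highlight or black box still exposes the text.
    Simplified sketch: handles only literal strings drawn with Tj,
    not hex strings, TJ arrays, or exotic stream filters.
    """
    found = []
    for m in re.finditer(rb"stream\r?\n(.*?)\r?\nendstream", pdf_bytes, re.S):
        data = m.group(1)
        try:
            data = zlib.decompress(data)  # FlateDecode-compressed stream
        except zlib.error:
            pass  # stream was not compressed; scan it as-is
        found += [t.decode("latin-1")
                  for t in re.findall(rb"\((.*?)\)\s*Tj", data)]
    return found

# A tiny hand-built PDF fragment standing in for a "redacted" upload:
sample = (b"%PDF-1.4\n1 0 obj\n<< /Length 60 >>\nstream\n"
          b"BT /F1 12 Tf 72 700 Td (Account: 123-45-6789) Tj ET\n"
          b"endstream\nendobj\n%%EOF")

print(extract_visible_strings(sample))  # -> ['Account: 123-45-6789']
```

Tools like Preview's redaction feature and Acrobat Pro's "Redact" pass by this check because they delete the text objects themselves, not just the pixels above them.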

The guide names specific tools including Apple Preview, Adobe Acrobat Pro, and free alternatives like PDFgear for Windows users. It references past failures by state attorneys general and private attorneys whose improper PDF redactions exposed sensitive data in cases involving the Manafort-Mueller documents and Epstein-related files. OpenAI's default settings allow chat data to be used for model training unless users manually opt out through account preferences.

Attorneys handling sensitive materials—bank statements, medical records, litigation documents—should treat AI uploads as a data retention risk. The timing reflects a broader gap: widespread adoption of AI for confidential work is outpacing both user awareness of privacy mechanics and federal privacy regulation. The practical takeaway is straightforward: permanent deletion tools matter, account settings matter more, and the safest approach remains not uploading sensitive originals at all.
