EDRM Advocates Embedded AI Safeguards in Legal Tools for Competence Under Pressure
The Electronic Discovery Reference Model published guidance this week arguing that legal competence with artificial intelligence depends on systemic safeguards built into tools themselves, not training alone. The article, "From Training to Execution: Embedded Safeguards for Responsible AI Use in Legal Practice," contends that safeguards must function reliably in high-pressure scenarios where human oversight falters.

Rose Hunter Jones of Hilgers, PLLC has documented a playbook for AI use in eDiscovery and litigation that exemplifies this approach. Thomson Reuters is developing what it calls "fiduciary-grade" AI with built-in accountability mechanisms.

The American Bar Association's Formal Opinion 512, issued in July 2024, requires technological competence under Model Rule 1.1, explicitly extending that duty to AI-specific risks including bias and hallucinations.