The new framework uses tiered risk management. Low-stakes administrative tasks such as intake routing and document organization can operate with full autonomy, while high-judgment work carrying malpractice liability remains under strict human control. Regulatory regimes increasingly point in this direction: the EU AI Act mandates human oversight for high-risk AI systems, and the NIST AI Risk Management Framework, though voluntary, recommends comparable controls. Significant governance gaps remain, particularly around data access sprawl, training data provenance, and permission accumulation across cloud and on-premises infrastructure.
Attorneys should expect this governance model to become standard practice. The shift reflects enterprise-wide challenges across the legal, healthcare, and regulatory sectors. Firms implementing agentic AI now face pressure to align security, compliance, and human accountability frameworks before deployment. Those still operating under reactive review models should begin mapping which tasks genuinely require human judgment and which can safely operate autonomously, and then establish controls accordingly.
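As a concrete illustration of the mapping exercise above, a tiered policy can be expressed as a simple lookup from task category to autonomy level, with a gate that fails closed for anything unclassified. This is a minimal hypothetical sketch; the tier names, task labels, and `requires_human` function are illustrative assumptions, not part of any specific firm's or regulator's framework.

```python
from enum import Enum

class Tier(Enum):
    AUTONOMOUS = "autonomous"              # e.g., intake routing, document organization
    HUMAN_REVIEW = "human_review"          # agent drafts, human approves before release
    HUMAN_CONTROLLED = "human_controlled"  # human performs the work; agent assists only

# Illustrative task-to-tier mapping; a real policy would be far more granular.
TASK_TIERS = {
    "intake_routing": Tier.AUTONOMOUS,
    "document_organization": Tier.AUTONOMOUS,
    "contract_summary": Tier.HUMAN_REVIEW,
    "legal_advice": Tier.HUMAN_CONTROLLED,
}

def requires_human(task: str) -> bool:
    """Return True unless the task is explicitly cleared for full autonomy.

    Unknown tasks default to the strictest tier, so the gate fails closed.
    """
    tier = TASK_TIERS.get(task, Tier.HUMAN_CONTROLLED)
    return tier is not Tier.AUTONOMOUS

print(requires_human("intake_routing"))   # False
print(requires_human("legal_advice"))     # True
print(requires_human("novel_task_type"))  # True (fail closed)
```

The fail-closed default matters: permission accumulation usually happens when new task types inherit permissive defaults, so an explicit allowlist keeps the autonomous tier from growing silently.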