From Human-in-the-Loop to Human-at-the-Helm: Navigating the Ethics of Agentic AI

Why it matters

The legal profession is shifting from reactive oversight of AI systems to proactive governance designed for autonomous tools. As artificial intelligence has evolved from generative systems that produce text on demand to agentic systems capable of independent action—sending emails, populating filings, modifying records—the traditional model of lawyers reviewing AI output after completion has become inadequate. Legal ethics experts are now calling for "human-at-the-helm" governance: establishing parameters and controls that define what AI is permitted to do before it acts, rather than inspecting results afterward.

The new framework uses tiered risk management. Low-stakes administrative tasks such as intake routing and document organization can operate with full autonomy, while high-judgment work carrying malpractice liability remains under strict human control. Regulatory and risk frameworks, including the binding EU AI Act and the voluntary NIST AI Risk Management Framework, increasingly call for this type of human oversight of high-risk autonomous systems. Significant governance gaps remain, particularly around data access sprawl, training data provenance, and permission accumulation across cloud and on-premises infrastructure.

Attorneys should expect this governance model to become standard practice. The shift reflects enterprise-wide challenges across legal, healthcare, and regulatory sectors. Firms implementing agentic AI now face pressure to align security, compliance, and human accountability frameworks before deployment. Those still operating under reactive review models should begin mapping which tasks genuinely require human judgment and which can safely operate autonomously—and establish controls accordingly.
