In Study, AI Helped Law Students Without Hurting Reasoning

Why it matters

A randomized controlled trial of 127–137 law students from the University of Minnesota and the University of Michigan found that two advanced AI tools—vLex's Vincent AI and OpenAI's o1-preview—significantly improved legal work quality and speed without degrading analytical reasoning. Students using these tools completed six realistic tasks (drafting persuasive letters, analyzing complaints, and similar exercises) and produced higher-quality work on four of the six assignments, with productivity gains ranging from 34 to 140 percent. The o1-preview model showed larger improvements overall. The study, led by Daniel Schwarcz at the University of Minnesota Law School and Sam Manning at the Centre for the Governance of AI, was published on SSRN on April 13, 2026.

The results mark a departure from earlier research on older models like GPT-4, which showed mixed or negative effects on top-performing students. The specific performance metrics for each task and detailed breakdowns of where the tools succeeded or fell short remain unpublished beyond the summary findings.

Practitioners should note the timing and scope of this research. The study tests newer AI architectures—retrieval-augmented generation for source accuracy and reasoning models for structured analysis—that address hallucination and reasoning deficits documented in prior work. With 55 percent of law schools now offering AI courses and firms deploying tools like Westlaw Edge and Lexis+ AI, this evidence of safer, more capable AI integration carries weight for legal education and junior associate training. Firms considering AI adoption for associate work should track whether these findings hold across different practice areas and experience levels.
