The healthcare dispute between ARIHQ and Santé Québec was arbitrated in Montreal. The arbitrator relied on generative AI to draft the award, though coverage of the ruling has not emphasized the arbitrator's identity.
This decision breaks new ground in North American jurisprudence on AI misuse in legal proceedings. Prior rulings have sanctioned lawyers and litigants for filing hallucinated content; Justice Sheehan's decision instead targets a decision-maker. The court identified five systemic risks of AI use, including hallucinations and the absence of human discretion in weighing community values and contextual circumstances. The ruling establishes that while peripheral AI use may be permissible, reliance on AI-generated legal foundations that undermines the integrity of the reasoning warrants annulment, both because it affects the outcome of the case and because it erodes public confidence in arbitration itself. Practitioners should expect courts to scrutinize how arbitrators and judges incorporate AI into substantive legal analysis.