The case, Kistler et al. v. Eightfold AI Inc., was brought by named plaintiffs Erin Kistler and Sruti Bhaumik, alongside former EEOC chair Jenny R. Yang and the nonprofit Towards Justice. Eightfold's platform is used by major employers including Microsoft, PayPal, Morgan Stanley, Starbucks, Chevron, and Bayer. The core legal question is whether Eightfold qualifies as a "consumer reporting agency" under federal law; if so, the company would be subject to strict FCRA requirements, including disclosure obligations, applicant access rights, and error-dispute mechanisms. According to the complaint, Eightfold's large language model incorporates over 1.5 billion data points drawn from the profiles of more than one billion workers, sourced from LinkedIn, job histories, publications, and tracking data collected without applicants' knowledge.
This case reframes AI hiring liability around data privacy and transparency rather than discrimination, a legal strategy that sidesteps the difficult task of proving algorithmic bias. The timing matters: Colorado's AI Act takes effect June 30, 2026 and requires risk assessments and transparency notices for employment AI; Illinois amended its Human Rights Act, effective January 1, 2026, to cover AI-mediated discrimination; and California's civil rights regulations governing automated-decision systems in employment are now in effect. FCRA statutory damages of $100 to $1,000 per willful violation, multiplied across a database covering a billion people, create massive financial exposure. Attorneys should monitor whether courts accept the FCRA framework as a viable path to holding AI hiring tools accountable.