New FCRA Class Action Expands Potential Scope of AI Litigation
In January 2026, job applicants Erin Kistler and Sruti Bhaumik filed a class action against Eightfold AI Inc., alleging that the company’s AI hiring platform operates as a consumer reporting agency without complying with the Fair Credit Reporting Act (FCRA). Unlike most AI-related litigation, which tends to focus on algorithmic bias, this case targets the privacy implications of AI. The plaintiffs allege that Eightfold AI compiled and used personal data without providing adequate disclosures, obtaining consent, or offering mechanisms to dispute the information collected. If the plaintiffs succeed, the FCRA’s statutory damages and private right of action could make this a high-stakes precedent.
What Is the Eightfold AI Class Action Lawsuit About?
In Kistler et al. v. Eightfold AI Inc., the complaint asserts that Eightfold AI’s hiring and screening platform collects extensive applicant data – including social media profiles, LinkedIn histories, location data, and online activity – to create individualized “likelihood of success” assessments. These profiles, generated by an AI model trained on billions of data points, are then used to rank candidates before any human review. The plaintiffs claim this process occurs without the disclosures, authorizations, or adverse action notices the FCRA requires.
Why the Eightfold AI Case Matters for Employers Using AI
The Eightfold AI case underscores a critical point: even decades-old statutes not written with AI in mind can be applied to modern technology. In many ways, this mirrors how courts have interpreted the California Invasion of Privacy Act in the context of internet tracking technologies such as cookies and pixels. It is yet another reminder that businesses using third-party AI platforms may face liability even if they do not control or fully understand the algorithms they use.
How Employers Can Reduce AI Hiring Litigation Risk
For businesses using AI in hiring or other decision-making:
- Be proactive: Maintain a clear inventory of AI systems, understand each system’s scope and capabilities, define accountability, and implement monitoring and periodic testing.
- Vendor contracts matter: Negotiate AI-specific terms covering regulatory compliance, audit rights, and liability/indemnification.
- Documentation: Keep records of AI training, testing, and oversight to demonstrate compliance.
Even as state AI statutes emerge, litigation under existing laws like the FCRA may pose the most immediate risk. For businesses using AI in hiring, now is the time to take a closer look at internal practices, including reviewing current policies, negotiating vendor agreements, and strengthening oversight to better manage these evolving legal risks.
Contact the author, Linda Wang, Partner & Co-Chair of CDF’s Privacy Practice Group, with questions regarding this blog or to inquire about a consultation on your organization's AI policies and procedures.