
Your AI Hiring Tool Might Be an Unlicensed Credit Bureau: The Lawsuits That Could Change How You Recruit

A new FCRA lawsuit against Eightfold AI, a collective action against Workday, and two state laws now in effect are creating real liability for any company using AI in hiring. Here's what to do about it.

By Meetesh Patel

On January 20, a proposed class action in California accused Eightfold AI of operating as an unlicensed consumer reporting agency under the Fair Credit Reporting Act. The allegation: Eightfold's platform scrapes 1.5 billion data points from LinkedIn, GitHub, and other public sources to score job candidates on a 0-to-5 scale, without telling any of them it's happening.

Eightfold's client list includes Microsoft, Morgan Stanley, Starbucks, PayPal, Chevron, and roughly a third of the Fortune 500. If you're using an AI hiring tool, there's a decent chance it works the same way.

That lawsuit didn't land in isolation. In May 2025, a federal judge in the Northern District of California granted preliminary collective action certification against Workday, finding that the company's algorithmic screening tools may disparately impact older workers. In a prior ruling, the same judge found that Workday's AI "participates in the decision-making process" for hiring, making the company a potential agent of every employer using its platform. Meanwhile, Illinois made AI employment discrimination explicitly actionable on January 1, and California's automated decision system rules have been in effect since October.

Four developments, three legal theories, one conclusion: if you're using AI to screen, rank, or filter job candidates, you're carrying liability you probably haven't priced in.

The FCRA theory: your vendor might be a credit bureau

The Eightfold lawsuit isn't a typical employment discrimination case. It's a Fair Credit Reporting Act case, and the theory is straightforward: if an AI platform compiles personal data from third-party sources and sells assessments that employers use for hiring decisions, that platform is a "consumer reporting agency" under 15 U.S.C. § 1681 et seq.

That classification carries real obligations. Consumer reporting agencies have to follow reasonable procedures to ensure accuracy, give people access to their reports, and let them dispute errors. Employers using those reports have to disclose that use to applicants, get their authorization, and follow adverse action procedures before rejecting someone based on the report.

None of that is happening with AI hiring tools. Candidates don't know they're being scored. They can't see the score. They can't dispute it. And when they get rejected, nobody tells them the AI assessment was a factor.

Here's what makes this theory dangerous for employers, not just vendors: the FCRA doesn't just regulate the agencies. It regulates the users of consumer reports too. If a court holds that Eightfold's assessments are consumer reports, every company that used those assessments without FCRA-compliant disclosures has its own exposure. Under 15 U.S.C. § 1681n, willful violations carry statutory damages of $100 to $1,000 per applicant, plus punitive damages and attorneys' fees. Scale that across thousands of rejected candidates and the numbers get serious fast.
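To put "serious fast" in numbers, here's a back-of-the-envelope sketch in Python. The $100-to-$1,000 band comes from § 1681n; the applicant count is a hypothetical assumption, not a figure from the case.

```python
# Back-of-the-envelope FCRA exposure estimate.
# The $100-$1,000 range is the statutory damages band for willful
# violations under 15 U.S.C. § 1681n; the applicant count is hypothetical.
applicants_scored = 25_000         # assumed: rejected candidates scored without disclosure
stat_min, stat_max = 100, 1_000    # statutory damages per applicant, willful violation

low, high = applicants_scored * stat_min, applicants_scored * stat_max
print(f"Statutory exposure: ${low:,} to ${high:,}")
# Statutory exposure: $2,500,000 to $25,000,000
# ...before punitive damages and attorneys' fees, which § 1681n also allows.
```

Even at the bottom of the statutory band, a mid-sized applicant pool produces seven-figure exposure before punitive damages enter the picture.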

The Eightfold case is still in its early stages. A motion to dismiss is likely in Q1 2026. But the FCRA theory doesn't depend on proving discrimination. It depends on proving that a company compiled and sold personal data assessments without following the rules. That's a much easier case to make.

The agent theory: your vendor's bias is your bias

Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal.), went further than most employment lawyers expected.

The case's first major test was a motion to dismiss in 2024. Judge Lin's July 2024 ruling rejected Workday's attempt to get the case thrown out and established the key legal theory: Workday's software "is not simply implementing in a rote way the criteria that employers set forth" but is "participating in the decision-making process." That language matters because it opens the door to treating Workday as an agent of the employers using its platform for purposes of the Age Discrimination in Employment Act, Title VII, and the Americans with Disabilities Act.

In May 2025, Judge Lin granted preliminary collective action certification under the ADEA. The plaintiff, Derek Mobley, claims he applied to over 100 jobs through Workday's platform and was systematically rejected because the algorithm penalized characteristics correlated with age. He received one rejection at 1:50 a.m., less than an hour after submitting his application.

If Workday is an agent, employers can't defend themselves by saying "we just used the vendor's tool." The vendor's discriminatory output becomes the employer's discriminatory act.

The case is now in discovery, with a ruling expected sometime in 2026. Workday is used by over 11,000 companies worldwide, and the case could affect hundreds of millions of job seekers who've been screened through its platform. Even if the final collective is narrower than the preliminary certification suggests, the legal theory is out there. And it applies to every AI hiring vendor, not just Workday.

One counterpoint: Workday will argue, and some courts may agree, that a software vendor isn't an "agent" in the traditional sense. The company doesn't make hiring decisions; it provides tools that employers configure. But Judge Lin wasn't buying that distinction at the certification stage, and the trend in AI liability cases is toward holding the humans (and companies) accountable for the tools they choose to use.

Two states aren't waiting for the courts

While the federal cases work through discovery and motions, Illinois and California have already changed the law.

Illinois: live now, no cure period

Illinois HB 3773, effective January 1, 2026, amends the Illinois Human Rights Act to make AI-driven employment discrimination explicitly actionable. The key features:

Disparate impact liability applies. You don't need to prove the employer intended to discriminate. If the AI produces discriminatory outcomes, that's enough. That includes using geographic data or other proxies that correlate with protected characteristics.

Employers must notify employees when AI is used in employment decisions, meaning hiring, promotion, discipline, and termination.

Vendors face accountability too, but that doesn't let you off the hook: if your HR tech provider's tool produces discriminatory outcomes, that's your problem under HB 3773 as well as theirs.

Enforcement runs through the Illinois Department of Human Rights, the Attorney General's office for pattern-or-practice cases, and right-to-sue letters that open the door to private suits. Unlike Colorado's AI Act, which was delayed to June 30, 2026, under federal pressure, Illinois has no cure period. There's no grace window to fix violations before enforcement begins.

California: testing is now evidence

California's automated decision system regulations under FEHA took effect October 1, 2025. The regulations create a new dynamic: anti-bias testing (or the lack of it) is now legally relevant evidence in discrimination cases.

That cuts both ways. If you tested your AI hiring tool and found bias, you need to show you corrected it. If you didn't test at all, that's evidence a court can use against you in a disparate impact claim.

Other provisions: both employers and vendors face liability. Data retention for automated decision system records extends to four years. Criminal history screening through AI is restricted pre-offer. And the rules apply to any employer with five or more employees.

California also has SB 7, the "No Robo Bosses Act," pending in the legislature. If passed, it would require human oversight of all significant AI employment decisions. Even as a pending bill, it signals where the regulatory direction is heading.

How this shows up in your deals and operations

If you're hiring, acquiring companies, or building HR tech products, this liability wave changes your playbook.

Vendor contracts need rework. Your agreements with AI hiring vendors should now include FCRA compliance representations, anti-bias testing obligations, data retention commitments, and indemnification for discriminatory outcomes. If your vendor won't agree to bias testing, that tells you something.

M&A diligence is expanding again. If you're acquiring a company that uses AI hiring tools, your diligence list just got longer. You need to know what tools they use, whether those tools have been tested for bias, whether the company has FCRA-compliant disclosure processes, and what their exposure looks like under Illinois and California law. A target that's been using an untested AI screener across 50,000 applicants isn't just a compliance issue. It's a liability.

Cyber insurance questionnaires will follow. Underwriters haven't caught up yet, but they will. Expect questions about AI hiring tool usage, bias testing, and FCRA compliance in your next renewal cycle.

Board reporting should include AI hiring risk. If your company screens candidates through any AI-powered tool, your board or audit committee should know the basics: what tool, what it does, whether it's been tested, and what your exposure looks like under the theories described above. This isn't a CISO issue. It's a legal and governance issue.

And if you're building AI hiring tools: the courts and state legislatures are done waiting. Bias testing, transparency, and FCRA compliance aren't optional anymore.

Practical takeaways

1. Inventory your AI hiring tools this week. Every tool that scores, ranks, screens, or filters candidates needs to be on a list. Include resume screeners, chatbot pre-screens, video interview analysis, and any tool that uses external data sources. (A minimal inventory sketch follows this list.)

2. Ask your vendors for bias audit results. If they don't have them, that's your answer. Under California law, the absence of testing is itself evidence in a disparate impact claim. Under Illinois law, you're on the hook for your vendor's discriminatory outcomes.

3. Review your applicant disclosures for FCRA compliance. If any of your tools compile data from third-party sources to generate candidate assessments, you may already need FCRA disclosures and adverse action notices. Get a legal opinion before the Eightfold case establishes the precedent.

4. Update vendor contracts with AI-specific provisions. Add representations on bias testing, FCRA compliance (if applicable), data retention, and indemnification for discriminatory outcomes. If you're in Illinois or California, add state-specific compliance obligations.

5. Add AI hiring risk to your M&A diligence checklist. For any target using AI in recruitment, diligence should cover tool inventory, bias testing history, FCRA compliance posture, and exposure estimates under current state laws.

6. Brief your board or audit committee. Criminal penalties aren't in play here (unlike CIRCIA), but class action exposure is. A single tool used across thousands of applicants creates class-wide liability. Your directors should understand the risk profile.

7. Watch the Colorado clock. The Colorado AI Act takes effect June 30, 2026, with civil penalties up to $20,000 per violation. If you have Colorado operations and haven't started impact assessments, you're behind.

8. Build notification processes for Illinois. HB 3773 requires employers to tell employees when AI is used in employment decisions. If you don't have a notification workflow, build one now.
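To make the inventory in item 1 concrete, here's a minimal sketch of what each record might capture, in Python. The field names and flagging rules are illustrative assumptions mapped to the theories above, not a compliance standard, and "ExampleVendor" is hypothetical.

```python
# Minimal sketch of an AI hiring tool inventory (item 1 above).
# Fields and flagging rules are illustrative, not a compliance standard.
from dataclasses import dataclass, field

@dataclass
class HiringToolRecord:
    vendor: str                   # platform provider
    function: str                 # "resume screening", "video analysis", ...
    uses_external_data: bool      # third-party data sources -> possible FCRA angle
    bias_audit_date: str | None   # last vendor-provided audit, if any
    candidate_notice: bool        # are applicants told AI is used? (IL HB 3773)
    retention_years: int          # CA's rules require 4 years for ADS records
    states_used_in: list[str] = field(default_factory=list)

inventory = [
    HiringToolRecord(
        vendor="ExampleVendor",   # hypothetical
        function="resume screening",
        uses_external_data=True,
        bias_audit_date=None,     # no audit on file -- flag for follow-up
        candidate_notice=False,
        retention_years=4,
        states_used_in=["IL", "CA"],
    ),
]

# Flag records that map to the liability theories above: external data
# without candidate-facing disclosure, or no bias audit on file.
for tool in inventory:
    if tool.uses_external_data and not tool.candidate_notice:
        print(f"{tool.vendor}: review FCRA disclosures and state notice rules")
    if tool.bias_audit_date is None:
        print(f"{tool.vendor}: no bias audit on file -- ask the vendor (item 2)")
```

Even a spreadsheet with these columns gets you most of the way; the point is having one list that counsel, HR, and the board can all work from.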

What we're watching

Eightfold AI motion to dismiss, expected Q1 2026: If the court allows the FCRA theory to proceed past the pleading stage, this case reshapes the entire AI hiring vendor market. Every AI recruiting platform will need to decide whether it's a consumer reporting agency.

Mobley v. Workday discovery phase, ruling expected 2026: The scope of the certified collective and the discovery findings will signal how courts treat AI vendor agent liability going forward.

Colorado AI Act enforcement begins June 30, 2026: The most comprehensive state AI employment law takes effect with $20,000-per-violation penalties. Failing to notify 10 rejected applicants could hit $200,000.

California SB 7 (No Robo Bosses Act): Pending legislation requiring human oversight of significant AI employment decisions. If passed, it would be the most restrictive state law on AI in hiring.

Illinois IDHR rulemaking on AI notice requirements: The department is developing draft rules on what adequate AI notification looks like under HB 3773. These will define the compliance baseline.

The legal system is catching up to AI hiring through three channels at once: FCRA litigation, discrimination litigation under the agent theory, and state legislation. Companies that built their recruiting stack around AI tools without thinking about liability are about to find out what that costs. The ones that audit, test, and fix their processes now will be in much better shape when the first ruling lands.

Disclaimer: This article is provided for informational purposes only and does not constitute legal advice. The information contained herein should not be relied upon as legal advice and readers are encouraged to seek the advice of legal counsel. The views expressed in this article are solely those of the author and do not necessarily reflect the views of Consilium Law LLC.
