
The FTC Is About to Call AI Bias Mitigation "Deceptive." Should You Stop Doing It?

By Meetesh Patel

On March 11, the FTC will issue a policy statement explaining when state laws that require AI systems to mitigate bias are preempted by federal consumer protection law. The theory behind it: if an AI model is trained on real-world data and a state forces you to adjust its outputs for fairness, the federal government considers those adjusted outputs less "truthful," and therefore deceptive under Section 5 of the FTC Act (15 U.S.C. 45).

That's eight days from now. If your company runs any kind of AI bias testing or fairness auditing, you need to understand what this statement will and won't change.

What the FTC Statement Will Likely Say

The statement flows from Executive Order 14365, signed December 11, 2025, which directed the FTC to explain how the FTC Act applies to AI and when state-mandated bias mitigation counts as a "deceptive act or practice."

Expect the statement to argue that state laws like Colorado's AI Act (SB 24-205, effective June 30) and Illinois's HB 3773 compel AI developers to alter statistically accurate outputs, making those outputs misleading to consumers. Under this reading, a hiring algorithm that adjusts its scoring to reduce disparate impact would be producing "untruthful" results.

The Commerce Department will publish a companion evaluation the same day, listing state AI laws it considers too burdensome. That list will be referred to the DOJ's AI Litigation Task Force for potential legal challenges.

Why It Probably Won't Stick

Here's the problem with the theory: policy statements aren't regulations. They don't carry the force of law, and they can't create preemptive authority that doesn't already exist in the statute.

The FTC Act doesn't explicitly preempt state law, and it doesn't occupy the entire field of consumer protection. That leaves only conflict preemption, which requires showing either that it's impossible to comply with both the federal and state requirements simultaneously, or that the state law stands as an obstacle to the federal statute's objectives. The Supreme Court applies a "presumption against preemption" in these cases, meaning the bar is high.

There's also a technical reality issue. Most state AI fairness laws don't mandate specific outputs. Colorado's law requires impact assessments and governance processes. Illinois's law creates liability for discriminatory outcomes. Neither tells you what number an algorithm must produce. Calling process requirements "output alteration" is a stretch courts are unlikely to accept.

And OpenAI itself told the White House Office of Science and Technology Policy that "federal preemption over existing or prospective state laws will require an act of Congress." If the biggest AI company in the world says the executive branch can't do this alone, that's worth noting.

The Decision Framework: What to Do Before March 11

You're running AI bias testing and mitigation programs. Don't stop. State laws are still enforceable, and 20 state attorneys general have signaled they'll sue to defend them. Dismantling your fairness program based on a nonbinding policy statement would leave you exposed on the state side with no real protection on the federal side.

You're building AI governance from scratch. Document everything. Keep records of your testing methodology, what adjustments you make and why, and the business justification for each decision. If the preemption fight goes to court, companies with well-documented, good-faith governance programs will be in the strongest position regardless of outcome.

You're in due diligence (raising, acquiring, or being acquired). Expect questions about this. Investors and acquirers will want to know how your AI compliance program handles the federal-state split. "We're ignoring state law because the FTC said so" won't survive legal review.

You're a Colorado or Illinois company. Watch whether your state appears on the Commerce Department's March 11 list. If it does, the DOJ's litigation task force will likely challenge your state's law. But until a court rules, the law is still in effect.

The March 11 statement is a political signal, not a legal shield. Build your AI governance to withstand both regimes. That's the only strategy that works whether the feds win the preemption fight or lose it.


If your company is sorting through the AI compliance puzzle, Consilium Law's Outside General Counsel model gives you a dedicated legal partner who stays ahead of these shifts so you don't have to. Get in touch.

Disclaimer: This article is provided for informational purposes only and does not constitute legal advice. The information contained herein should not be relied upon as legal advice and readers are encouraged to seek the advice of legal counsel. The views expressed in this article are solely those of the author and do not necessarily reflect the views of Consilium Law LLC.
