A startup founder told his investors that artificial intelligence powered his company's product. The product actually ran on human workers in the Philippines. He raised $42 million on that story. Now he's facing criminal charges that could put him in prison.
That's not a hypothetical. On April 9, 2025, the SEC and the DOJ filed parallel civil and criminal cases against Albert Saniger, the founder of Nate Inc., a shopping app startup. The charges: securities fraud and wire fraud. The claim: Saniger told venture capital investors his app used machine learning and neural networks to automate online purchases. The reality: virtually every purchase was completed by hand.
If you're raising capital right now and your pitch deck uses the words "AI-powered," "machine learning," or "automated," you need to understand how fast enforcement has shifted. Eighteen months ago, the SEC's first AI-washing action resulted in a combined $400,000 fine split between two small firms. Today, the same conduct can land you in federal court defending criminal charges.
What is AI-washing and why does the SEC care?
AI-washing is what happens when a company exaggerates or fabricates its use of artificial intelligence to attract investment, customers, or market attention. Greenwashing's tech cousin. Claims about AI capabilities that don't match operational reality.
The SEC cares because these claims move money. When a founder tells investors that AI powers their product, that representation influences investment decisions. If it's false, it's securities fraud under the same statutes that have applied to every other form of investor deception for decades.
There's no special "AI disclosure rule." SEC Chairman Paul Atkins has said plainly that he doesn't think one is necessary. At the SEC's Investor Advisory Committee meeting in December 2025, Atkins said the agency's "principles-based rules were intentionally designed to allow companies to inform investors of material impacts of any new development, including AI." He doubled down at a March 2026 FSOC roundtable: "Prescriptive mandates are not the answer to every emerging technology."
Translation: the SEC won't give you a checklist of what to disclose about AI. But if you lie about it, the existing anti-fraud rules are more than enough to come after you.
The enforcement escalation: four cases in 18 months
What makes this moment different is speed. The SEC went from its first AI-washing enforcement action in March 2024 to a parallel criminal prosecution in April 2025. Each case pushed the boundary further.
Delphia and Global Predictions (March 2024)
The SEC's opening move was modest. Two investment advisers, Delphia and Global Predictions, settled charges for making false AI claims in their marketing materials. Delphia told clients it used "machine learning" to analyze their data and make investment decisions. Global Predictions called itself the "first regulated AI financial advisor." Neither actually had the AI capabilities they advertised.
The penalties were small: $225,000 for Delphia, $175,000 for Global Predictions. The charges rested on the antifraud provisions of Section 206 of the Investment Advisers Act and its Marketing Rule, Rule 206(4)-1. No individual executives were named.
But it established the principle: false AI claims in marketing materials trigger securities enforcement.
Presto Automation (January 2025)
The SEC moved to a public company. Presto Automation, a Nasdaq-listed restaurant technology company, told investors its "Presto Voice" product used AI to automate drive-through ordering. The product, Presto said, "eliminated the need for human order taking."
That was false. Over 70% of orders required human intervention. At some locations, 100%. Those humans were contract workers in the Philippines and India. Presto also told investors the technology was proprietary. For its first year, the underlying system was actually owned and operated by a third party.
Two things stand out here. First, the SEC charged negligence-based fraud under Sections 17(a)(2) and 17(a)(3) of the Securities Act, not intentional fraud. You don't have to deliberately lie; failing to check whether your AI claims are accurate is enough. Second, the SEC specifically faulted Presto for having no disclosure controls: nobody at the company was formally responsible for making sure AI claims in SEC filings matched reality.
No financial penalty was imposed because Presto cooperated. But the precedent was set: even unintentional overclaiming, combined with weak internal controls, triggers enforcement.
Nate Inc. (April 2025)
This is where it turned criminal.
Albert Saniger founded Nate in 2018 as a mobile shopping app. The pitch: click "buy," and Nate's AI handles the entire checkout process using machine learning and neural networks. He raised $42 million from venture capital firms on that story.
According to the SEC complaint and the DOJ indictment, the app's automation rate was "essentially zero." Virtually all purchases were completed by contract workers in the Philippines and Romania. Saniger allegedly knew this and took active steps to hide it.
The concealment was systematic. Saniger directed employees to run fake product demos for investors where workers manually processed orders behind the scenes to make it look automated. He instructed overseas contract workers to remove any reference to Nate from their social media profiles. When potential investors ran test transactions, Saniger's team allegedly prioritized those orders so investors would experience fast, seemingly automated service.
After The Information reported in June 2022 that Nate's AI claims were false, the company couldn't raise more money and shut down in 2023. Investors lost tens of millions. Saniger allegedly sold roughly $3 million of his own shares during a 2021 fundraising round.
The DOJ charged securities fraud and wire fraud under 18 U.S.C. § 1343. The SEC filed a parallel civil complaint under Section 10(b) of the Securities Exchange Act and Section 17(a) of the Securities Act.
Worth noting: this happened under the current administration, not the prior one. AI fraud against investors isn't a partisan enforcement priority. It's just enforcement.
Mozaic Payments (November 2025)
The most recent case is also the most extreme.
Marcus Cobb, former CEO of Mozaic Payments, was indicted in Boston for wire fraud conspiracy after allegedly fabricating every material aspect of his company to secure $20 million from Volition Capital, a Boston-based private equity firm.
Mozaic marketed itself as an AI-powered platform for processing automated royalty-split payments in the entertainment industry. Neither the app nor its API actually functioned. No revenue. Zero real customers. Cobb allegedly created fictitious clients, fake testimonials, falsified financials, and doctored bank records.
Nearly all of the $20 million went to lavish travel and entertainment. Co-founder Rachel Knepp pleaded guilty in November 2025. Cobb faces up to 20 years.
Here's the distinction that matters: Nate had a real product that was far less automated than advertised. Mozaic allegedly had no functioning product at all. The enforcement spectrum now covers everything from exaggerated automation claims to outright fabrication.
The SEC built a team for this
These aren't one-off cases.
In February 2025, the SEC created the Cyber and Emerging Technologies Unit (CETU), a team of roughly 30 fraud specialists and attorneys focused on, among other things, "fraud committed using emerging technologies, such as artificial intelligence and machine learning." CETU replaced the former Crypto Assets and Cyber Unit. The renaming tells you where the SEC thinks the next wave of fraud is coming from.
CETU's leadership has publicly called rooting out AI-washing fraud an "immediate priority."
On the examination side, the SEC Division of Examinations named AI as a key priority for fiscal year 2026. Examiners will review whether representations about AI capabilities are accurate and whether AI-driven recommendations match what companies tell their investors.
And then there's private litigation. Securities class actions targeting alleged AI misrepresentations doubled between 2023 and 2024. Through the first half of 2025, Stanford Law School's Securities Class Action Clearinghouse identified 53 AI-related class actions. The median settlement in resolved cases is $11.5 million. The average is $38.4 million.
Even if the SEC doesn't come after you, shareholders can. And the plaintiffs' bar is actively looking for companies whose AI claims don't hold up.
Where the line is
Based on the four enforcement cases and the SEC's public statements, here's how I'd frame the risk for a founder sitting across from me.
Over the line
Claiming AI does something that humans actually do. This is the core fact pattern in Nate and Presto. If your product requires substantial human intervention to function but you tell investors it's automated or AI-powered, you've crossed the line. It doesn't matter if you intend to build the AI eventually.
Faking demos. Having employees manually process transactions behind the scenes during investor product demos is affirmative fraud. Central to the Nate case.
Fabricating metrics. Reporting "automation rates" or "AI completion rates" that don't reflect actual operational data. Presto's "non-intervention" metrics created a false impression of autonomous operation.
Misrepresenting technology ownership. Calling third-party technology "proprietary" or "our technology" when you're licensing or white-labeling someone else's system.
Gray zone
Aspirational claims without timeline qualifiers are risky. Saying "our AI will handle X" without specifying when, or without disclosing that it currently can't, falls into territory the SEC hasn't fully defined. But the Nate case suggests that if you're raising money on the aspiration, you need to be transparent about where you actually are.
There's also the puffery question. General statements like "we're building the future of AI-powered logistics" are probably fine. Specific claims like "our AI processes 95% of orders without human involvement" are testable, material, and enforceable.
And partial automation. If AI handles some tasks and humans handle others, disclosure of the split matters. Presto got in trouble not because humans were involved, but because the company claimed they weren't.
Safer ground
Honest disclosure of hybrid systems. "Our platform combines AI models with human review to ensure accuracy" is transparent and defensible.
Specific, verifiable performance claims backed by data. If you say your AI achieves a 92% accuracy rate and you can demonstrate that with internal data and methodology, you're on solid ground.
Clear distinction between current capabilities and development roadmap. "Today, our AI handles intake and classification. Our roadmap includes automated decisioning by Q3 2026." That's transparent about both the present and the future.
What to do
1. Audit your pitch deck and investor materials this week. Search every slide for "AI," "machine learning," "automated," "neural network," "algorithm," and "proprietary." For each claim, ask: can we prove this is true right now, with data? If the answer is no, rewrite it.
2. Review your product demos. If any part of your demo involves manual processing that looks automated, fix it immediately. The Nate case shows that staged demos are treated as affirmative fraud.
3. Assign someone to own AI claims accuracy. Someone at your company needs to be responsible for the accuracy of AI-related statements in investor materials, marketing, SEC filings, and press releases. Presto's lack of any disclosure controls was specifically cited by the SEC as a violation. This is a process fix, not a hire.
4. Document your automation metrics honestly. If your product involves human-in-the-loop processes, measure and document the actual split. What percentage of tasks does AI handle? What requires human intervention? Keep this data current so you can produce it if questioned.
5. Brief your board at the next meeting. Directors need to understand that AI claims in fundraising materials create securities liability. Include it in your risk discussion. Board-level awareness is a governance requirement, and it's also a defensive asset if claims are later challenged.
6. Check your Form D and offering documents. If you've filed with the SEC under Regulation D, review the business description for AI claims that don't match current capabilities. Inaccurate filings create exposure even when you're raising under an exemption.
7. Add a counsel review step for investor-facing claims. Founders often write pitch decks alone. Add a review step where someone with securities law awareness checks AI-related claims against operational reality before the deck goes out.
8. Prepare for tougher AI due diligence. VCs and PE firms are increasingly asking for technical validation of AI claims. Be ready to provide code access, demonstrate live systems, and share real usage data. Investors who got burned by Nate and Mozaic are going to dig deeper. That's not paranoia. It's pattern recognition.
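Two of the steps above (the keyword audit in step 1 and the automation-rate measurement in step 4) lend themselves to simple internal tooling. Here's a minimal sketch, assuming you have plain-text exports of your investor materials and a count of human-handled tasks from your own logs; the trigger terms, function names, and numbers are illustrative, not a compliance standard:

```python
import re

# Trigger terms to flag for human review, per step 1 above (illustrative list).
TRIGGER_TERMS = [
    "AI", "artificial intelligence", "machine learning", "automated",
    "neural network", "algorithm", "proprietary",
]

def flag_ai_claims(text: str) -> list[str]:
    """Return sentences containing any trigger term.

    Case-insensitive, whole-word match, so 'AI' doesn't fire on 'maintain'.
    Each flagged sentence should get a human answer to: can we prove this
    is true right now, with data?
    """
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(t) for t in TRIGGER_TERMS) + r")\b",
        re.IGNORECASE,
    )
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if pattern.search(s)]

def automation_rate(total_tasks: int, human_handled: int) -> float:
    """Share of tasks completed with no human intervention (step 4's metric)."""
    if total_tasks == 0:
        return 0.0
    return (total_tasks - human_handled) / total_tasks

deck = ("Our AI processes orders end to end. We maintain strict security. "
        "Checkout is fully automated.")
print(flag_ai_claims(deck))        # flags the two AI/automation sentences
print(automation_rate(1000, 720))  # 0.28 — the kind of split Presto overstated
```

The point of the sketch isn't the code; it's the discipline. Every flagged sentence gets a documented answer, and the automation rate you quote to investors is the one your logs actually produce.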
What we're watching
Nate Inc. trial timeline. As of mid-2025, the SEC was still trying to serve Saniger in Spain. The criminal case timeline will signal how aggressively the DOJ pursues these matters.
CETU's next moves. A 30-person unit formed in February 2025 will produce more cases. The next wave could hit larger public companies or fund managers, not just startups.
Private litigation outcomes. With 53 AI-related class actions filed through H1 2025 and a median settlement of $11.5 million, court rulings on pleading standards will shape the risk picture. The IonQ and C3.ai cases are worth tracking.
SEC advisory committee AI disclosure recommendation. Chairman Atkins rejected it in December 2025, but if enforcement actions keep coming, pressure for specific AI guidance will build.
State attorneys general. They have separate authority to pursue AI fraud claims. Watch for state enforcement where the SEC's lighter-regulation approach leaves gaps.
A year and a half ago, claiming your product was AI-powered when it wasn't cost you $175,000. Today it can cost you your freedom. The SEC and DOJ aren't slowing down. They're hiring more people and building more cases.
If your AI claims are accurate, none of this changes anything for you. If they're not, fix them now. Before someone else decides to check.
This article is for informational purposes only and does not constitute legal advice. Every company's situation is different, and you should consult with qualified legal counsel before making compliance decisions based on the developments discussed here.
If your company is raising capital and wants to make sure your investor-facing materials meet current enforcement standards, Consilium Law's Outside General Counsel practice can help you audit AI claims and build the disclosure controls the SEC expects to see.