Bradley Heppner's criminal trial began yesterday in the Southern District of New York. The AI privilege ruling that came out of his case in February is the most consequential generative-AI legal development of the year, and it landed on every general counsel's desk this week.
On the same day in February, two federal courts answered the same question and reached opposite conclusions. Judge Jed Rakoff held that documents a criminal defendant generated using consumer Claude were not protected by attorney-client privilege or work product, and had to be turned over to prosecutors. A few hundred miles away, a magistrate judge in the Eastern District of Michigan held that materials a pro se plaintiff prepared using ChatGPT were protected work product, and refused to compel their production.
If you have employees using AI tools to think through legal problems, draft sensitive communications, or organize material for outside counsel, these are the rulings to read. Together they draw a line that most companies are currently sitting on the wrong side of. And the fix isn't what most people think it is.
What happened
In United States v. Heppner, No. 1:25-cr-00503-JSR (S.D.N.Y.), Judge Rakoff ruled from the bench on February 10, 2026, and issued a written memorandum opinion seven days later on February 17 (ECF No. 27). The defendant, after receiving a grand jury subpoena and retaining counsel, had used the consumer version of Anthropic's Claude without his lawyers' direction to generate what the court described as "reports that outlined defense strategy" and "what he might argue with respect to the facts and the law." He forwarded those reports to his attorneys. The government moved to compel production. Rakoff granted the motion.
The same day, in Warner v. Gilbarco, Inc., No. 2:24-cv-12333 (E.D. Mich.), Magistrate Judge Anthony Patti ruled the other way. Warner is an employment discrimination case. The pro se plaintiff used ChatGPT to help prepare litigation materials. Defendants moved to compel production of "all documents and information concerning Plaintiff's use of third-party AI tools in connection with this lawsuit." Judge Patti denied the motion, holding that the materials qualified as work product under Federal Rule of Civil Procedure 26(b)(3)(A).
Two federal judges. Same day. Generative AI on both sides of the v. Opposite results.
What the Heppner court actually said
Rakoff rejected privilege on three independent grounds, and rejected work product on a fourth. Each one matters.
Claude is not a lawyer. The classic privilege test, traceable to Wigmore and reaffirmed in Upjohn Co. v. United States, 449 U.S. 383 (1981), requires a communication to an attorney or her agent for the purpose of obtaining legal advice. An AI tool, Rakoff held, is "obviously not an attorney." It has no law license, owes no duty of loyalty, and cannot form an attorney-client relationship. The court did not extend the Kovel doctrine, which brings non-lawyer agents under privilege when a lawyer retains them to facilitate legal advice (United States v. Kovel, 296 F.2d 918 (2d Cir. 1961)). Kovel requires the agent to be working at counsel's direction. Heppner was using Claude on his own.
The communications weren't confidential. This is the analytically heavy part of the opinion, and it does the most work. Rakoff focused on Anthropic's consumer terms of service. Section 4 permits Anthropic to use customer inputs to train its models. Section 12 reserves Anthropic's right, "at our sole discretion," to disclose user inputs and outputs in response to "governmental, court, and law enforcement requests." Rakoff treated those provisions as dispositive evidence that Heppner had no reasonable expectation of confidentiality when he typed his defense strategy into a consumer chat window. Under the third-party disclosure doctrine, voluntary disclosure to a non-privileged third party defeats privilege. Anthropic, on these contractual terms, was a third party.
He wasn't seeking legal advice. Rakoff also found that Heppner generated the materials independently and shared them with counsel as inputs to the defense's strategy discussions. That isn't the privilege fact pattern. Privilege protects communications made to obtain legal advice, not communications made about the law and later forwarded to a lawyer.
Work product fails too. Under Hickman v. Taylor, 329 U.S. 495 (1947), and FRCP 26(b)(3)(A), work product protects materials "prepared in anticipation of litigation or for trial by or for another party or its representative." Rakoff held the Heppner materials weren't prepared by counsel or at counsel's direction. They were prepared by the client, unilaterally, and the character of the document was fixed at creation. Forwarding to a lawyer afterward couldn't retroactively transform freelance preparation into work product.
The most important sentence in the opinion is one Rakoff didn't need to write but did anyway. "Had counsel directed Heppner to use Claude," he wrote, "Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege." That's a Kovel opening for attorney-directed AI use, preserved as dictum, waiting for the right case.
What the Warner court actually said
Judge Patti's analysis runs on a different doctrinal track, and that's the key to reading the two cases together.
The court started with the long-standing distinction between attorney-client privilege waiver and work-product waiver. Privilege waives on voluntary disclosure to any third party. Work product waives only on disclosure to "an adversary or in a manner likely to reach an adversary." The standards aren't the same, and they were never meant to be. Hickman protects the integrity of litigation preparation; the waiver test is correspondingly narrower.
On that framework, Judge Patti held: "ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background." Using ChatGPT to draft a brief is, in his view, no more a waiver than using Word, a search engine, or a legal research database. There's no disclosure to an adversary. There's no path by which the inputs are likely to reach an adversary. So work product survives.
Are Heppner and Warner actually in conflict?
Read carefully, Heppner and Warner could both be right on the same facts.
Heppner is a privilege case decided on the third-party disclosure doctrine. Privilege requires confidentiality, confidentiality requires no third-party recipient, and Anthropic's consumer terms make Anthropic a third-party recipient with disclosure rights. Privilege fails.
Warner is a work-product case decided on the adversary-disclosure waiver standard. Work product survives unless materials are disclosed to an adversary or in a manner likely to reach one. ChatGPT use clears that bar. Work product survives.
You could apply both rulings to the same set of AI-generated materials and get a perfectly coherent answer: privilege fails, work product survives. The doctrinal disagreement isn't really about AI. It's about whether characterizing the AI provider as a "third party" for privilege purposes also makes it a route to "an adversary" for work-product purposes. The two courts answered that question differently, and that's the seam future cases will be litigated in.
The likely synthesis when the circuits eventually weigh in: the contract controls. If the vendor agreement preserves confidentiality and restricts disclosure, the AI is a tool and the analysis runs Patti's way. If the consumer terms reserve disclosure rights and training use, the AI is a third-party recipient and the analysis runs Rakoff's way. The technology is the same. The contract is what changes.
What does opting out of AI training actually do for privilege?
A common first instinct after these rulings is to tell employees to opt out of training use in their account settings. It's the wrong fix, and it's worth being precise about why.
Anthropic's consumer terms have two independent provisions that destroyed confidentiality in Heppner. They do different doctrinal work, and they aren't equally vulnerable to user-side controls.
Section 4 (training use) is the one opt-out addresses. If you opt out, Anthropic stops using your inputs to train models, most of the time. The carve-outs matter: even with opt-out, Anthropic reserves training use for content you submit as feedback and content flagged for safety review. Opt-out isn't absolute on its own terms.
Section 12 (government and law enforcement disclosure) is the one opt-out doesn't touch at all. Section 12 reserves Anthropic's right to disclose inputs, outputs, and actions to governmental, court, and law enforcement requests "at our sole discretion." There's no consumer-account toggle that turns this off. It's a non-negotiable term of the consumer relationship.
Section 12 does the heavy lifting under traditional privilege analysis. The third-party disclosure doctrine asks whether the client maintained a reasonable expectation of confidentiality. When the contractual counterparty has expressly reserved the right to hand your communications to the government, the answer is no. That answer doesn't change because you also turned off training use.
A clean three-tier framework:
Tier 1: Consumer plan, default settings. Worst posture. Training use and government disclosure are both live. Heppner applies cleanly. Treat anything employees put into this kind of tool as discoverable.
Tier 2: Consumer plan with training opt-out. Marginally better. Training-use exposure is reduced but not eliminated. Government disclosure under Section 12 is unchanged. A court applying Heppner reasoning will still find no reasonable expectation of confidentiality. Opt-out is defense in depth, not a privilege fix.
Tier 3: Enterprise contract with confidentiality, no-training, and disclosure-notice provisions. This is the posture Rakoff signaled "might" change the analysis. Standard enterprise SaaS terms (confidentiality obligations running from the vendor to the customer, contractual no-training defaults that aren't user-toggleable, restrictions on third-party disclosure with notice and protective-order requirements, and DPA-style processor language) put the relationship on a different footing. Combined with attorney-directed use under a Kovel-style arrangement, this is the strongest available position under current law.
The headline for general counsel: stop telling employees to "just opt out." It gives them a false sense of safety on the issue that matters most. The fix has to be at the contract layer.
What I'd tell a CEO this week
The action set is short and unglamorous.
First, inventory what AI tools your employees are actually using, including the ones you didn't approve. Most companies underestimate this by half. Personal-account use of consumer tools on work matters is the biggest exposure most companies have right now and the one they have the least visibility into.
Second, pick one enterprise-tier tool per use case and consolidate. Fewer vendors, better contracts, easier to defend. Make sure the contract you sign actually contains the four provisions that matter: vendor confidentiality obligation, no-training default that isn't user-toggleable, third-party disclosure restrictions with customer notice, and processor-style data handling language.
Third, write a one-page acceptable use policy that names approved tools, prohibits consumer accounts for work data, and gives examples of what not to paste. The examples matter more than the rules.
Fourth, build an attorney-directed use protocol for legal-sensitive AI work. If your in-house counsel or outside firm wants the Kovel opening Rakoff left in Heppner to be available, the lawyer needs to be the one selecting the tool, instructing the use, and treating the outputs as part of her own work product. That doesn't happen by accident.
Fifth, address the litigation hold gap. If consumer AI chats are discoverable, they're also subject to preservation obligations once litigation is reasonably anticipated. Most companies have no mechanism to preserve ChatGPT conversation history on personal employee accounts. That's a sanctions exposure waiting to happen, separate from the privilege question.
Frequently Asked Questions
Does the Heppner ruling apply to my company if we're not in litigation?
Yes. The ruling sets the discoverability standard for any AI chat your employees create on consumer plans. If litigation, an investigation, or a regulatory inquiry arrives later, those chats can be subpoenaed and the privilege defense Heppner tried to raise won't be available. The exposure exists from the moment the chat is created, not from the moment a subpoena lands.
Does opting out of training in my Claude or ChatGPT account settings protect attorney-client privilege?
No, not on its own. Consumer terms of service contain two independent provisions that destroy confidentiality: training use (which opt-out can address) and the provider's reserved right to disclose user inputs to government and law enforcement requests (which opt-out cannot touch). A court applying the Heppner reasoning will still find no reasonable expectation of confidentiality even if you opted out of training.
What's the difference between consumer and enterprise AI plans for privilege purposes?
Consumer plans operate under terms that permit training use and reserve broad third-party disclosure rights. Enterprise plans (like Claude for Work, Claude Enterprise, or ChatGPT Enterprise) are governed by negotiated contracts that typically include vendor confidentiality obligations, contractual no-training defaults, and restrictions on third-party disclosure. Judge Rakoff signaled in Heppner that enterprise terms might change the privilege analysis. Consumer terms don't.
If I tell my lawyer to use AI for me, is that protected?
Possibly, and this is the most important open question. Heppner explicitly preserved a Kovel-style argument: if counsel directs the AI use, selects the tool, and treats outputs as part of her own work, the AI might function as an agent of the lawyer for privilege purposes. No federal court has yet applied that framework to AI tools. The first case to test it will set the operational template for attorney-directed AI use.
Do I need to do anything this week?
If you have employees using consumer AI accounts on work-related matters, yes. Three things: inventory which tools are actually being used (most companies underestimate by half), restrict consumer-account use of work data through a written acceptable use policy, and start migrating critical legal-sensitive AI work onto an enterprise contract. The legal exposure from waiting another quarter is meaningfully higher than the operational cost of moving now.
What we're watching
- The first appellate decision. Heppner and Warner are district court rulings. The Second and Sixth Circuits will eventually weigh in, and the contract-controls synthesis is the most likely landing spot.
- The next attorney-directed use case. Rakoff's Kovel dictum is a roadmap. Some defense lawyer is going to test it on facts where counsel can credibly say she selected the tool, directed the use, and treated the outputs as her own work. That's the case to watch for the operational template.
- Insurance and renewal questionnaires. D&O and cyber carriers don't currently address AI-discovery scenarios. Expect questions about AI governance in the next renewal cycle. Companies with documented programs will have an easier conversation.
- State court adoption. State trial courts will start citing Heppner in civil discovery disputes. The work-product analysis will be the most contested piece, because state work-product standards vary.
Close
The two rulings together don't tell us that AI is too dangerous to use. They tell us that the contract controls, and that most companies are using the wrong contract. Fixing that is a small operational lift. Not fixing it is a bet that no one will ever subpoena your chat history. After this month, that bet got more expensive.
This article is for informational purposes only and does not constitute legal advice. Every company's situation is different, and you should consult with qualified legal counsel before making compliance decisions based on the developments discussed here.
Building an AI governance program that holds up under discovery, including vendor contract review, acceptable use policies, and attorney-directed use protocols, is the kind of work growth companies are increasingly asking outside general counsel to lead as AI risk becomes a board-level question.