31 Documents. One Privacy Policy. Why the AI Your Legal Team Uses Is Now a Legal Liability.
Two federal rulings 17 days apart drew a bright line: consumer AI used independently is discoverable. The deciding factor isn't capability — it's what your vendor's privacy policy says when prosecutors ask.
A former CEO received a federal grand jury subpoena. Before his arrest, he opened Anthropic's consumer Claude, described his situation, and generated 31 documents analyzing his legal exposure. He likely thought this was private. Judge Jed Rakoff of the Southern District of New York handed all 31 to federal prosecutors last week.
Our take
Enterprise teams have spent three years choosing AI tools based on one question: does this do what we need?
That question is no longer enough.
The Heppner ruling adds a second question that every legal, compliance, and IT leader needs answered before their team opens a chatbot: what does this vendor's privacy policy say when the government asks for our data?
This matters now because the logic Rakoff applied is simple. Simple enough to understand in two minutes. Simple enough for opposing counsel, federal prosecutors, and post-M&A litigators to apply to your team's AI usage right now. The gap between "enterprise AI governance" as a line item and "enterprise AI governance" as a legally defensible architecture has never been more visible — or more expensive.
The context
United States v. Heppner (SDNY, Feb. 27, 2026)
Bradley Heppner, former CEO of Beneficient Company Group, was indicted in October 2025 for securities fraud and wire fraud. After receiving a grand jury subpoena — but before his arrest — he used the consumer version of Anthropic's Claude to analyze his legal exposure. He generated 31 documents. He argued they deserved attorney-client privilege or work product protection.
Rakoff rejected both claims on four grounds:
No attorney-client relationship exists with an AI tool. Claude is not a licensed attorney. The relationship the privilege protects cannot form with software that explicitly disclaims providing legal advice when asked.
No reasonable expectation of confidentiality. Anthropic's consumer privacy policy explicitly permits disclosure of inputs and outputs to third parties, including government authorities. Users who accept those terms cannot later claim confidentiality. The policy was used against Heppner in open court.
Claude said so itself. When Heppner asked Claude for legal advice, Claude disclaimed it. That disclaimer appeared in Rakoff's ruling.
Work product doctrine does not apply. The documents were not prepared by or at the direction of counsel. Heppner generated them independently.
Then Rakoff wrote the sentence that should be part of every enterprise AI policy discussion: "Had counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege."
That is a roadmap. Most enterprise teams are ignoring it.
Warner v. Gilbarco (E.D. Michigan, Feb. 10, 2026)
Seventeen days before Heppner, U.S. Magistrate Judge Anthony Patti reached a different conclusion in an employment discrimination case. A pro se plaintiff had used ChatGPT to develop her litigation strategy. The court found those exchanges potentially protected as attorney work product — because she was directing the AI toward deliberate legal strategy and functioning as her own counsel.
The distinguishing factor had nothing to do with which AI tool she used. It was supervision, purpose, and the nature of the interaction.
The M&A exposure
If you think this only applies to fraud defendants, read what Mayer Brown published March 10th:
"The rise of AI use in deal processes, whether for analyzing term sheets, summarizing due diligence findings, or identifying mark-up issues, creates an emerging category of potentially discoverable evidence. No practitioner wants their or their clients' prompts, threads or other content to be used against their clients in earnout litigation, breach claims or other post-closing disputes." (JD Supra / Mayer Brown)
An M&A team feeding a confidential acquisition target's documents into consumer ChatGPT — generating deal risk summaries, working through term sheet angles, analyzing due diligence findings — may be creating discoverable evidence for post-closing disputes. The governing document is not the internal NDA. It is the vendor's privacy policy.
This is the same structural failure pattern we traced in the missing layer of enterprise AI security: tools deployed fast, governance built later, consequences discovered in court.
What enterprise governance actually looks like
Rakoff's roadmap has four conditions. None of them is complicated. All of them, however, are incompatible with every consumer AI tool on the market (a minimal sketch of how to check them follows the list):
1. Directed by counsel. AI use must be initiated and supervised by qualified legal counsel. A compliance team using a consumer AI assistant to analyze regulatory exposure without attorney oversight is in Heppner territory, regardless of how good the AI is at the task.
2. Contractual confidentiality. The vendor agreement must explicitly prohibit the AI provider from using client data for model training, disclosing inputs or outputs to third parties, or retaining session data. Consumer terms do not offer this. Enterprise agreements that put enforceable obligations and real liability exposure on the vendor are required.
3. No third-party data disclosure. The provider cannot share inputs or outputs with government authorities under the governing agreement. Anthropic's consumer privacy policy fails this by design. Consumer-tier AI from every major provider fails this condition.
4. Grounded in counsel's legal strategy. AI usage must be documentable as counsel-directed and connected to a legal strategy — not just an employee going off-script with a chatbot.
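To make the roadmap concrete, here is a minimal sketch of how a governance team might encode the four conditions as a pre-flight check before approving an AI deployment. The profile fields and function names (AIUsageProfile, heppner_exposure) are our own illustration, not a court-defined or vendor schema:

```python
from dataclasses import dataclass

@dataclass
class AIUsageProfile:
    """Illustrative record of how an AI tool is deployed and governed.

    Field names are our own sketch, not a court-defined schema.
    """
    directed_by_counsel: bool      # condition 1: counsel initiated and supervises use
    contract_bars_training: bool   # condition 2: vendor may not train on client data
    contract_bars_retention: bool  # condition 2: vendor may not retain session data
    contract_bars_disclosure: bool # condition 3: no disclosure to third parties/government
    tied_to_legal_strategy: bool   # condition 4: documented link to counsel's strategy

def heppner_exposure(profile: AIUsageProfile) -> list[str]:
    """Return the Heppner-roadmap conditions a deployment fails."""
    failures = []
    if not profile.directed_by_counsel:
        failures.append("1: use is not directed by counsel")
    if not (profile.contract_bars_training and profile.contract_bars_retention):
        failures.append("2: no contractual confidentiality (training/retention)")
    if not profile.contract_bars_disclosure:
        failures.append("3: vendor may disclose inputs and outputs to third parties")
    if not profile.tied_to_legal_strategy:
        failures.append("4: usage is not grounded in counsel's legal strategy")
    return failures

# Consumer-tier terms fail conditions 2 and 3 by design; an employee
# going off-script typically fails 1 and 4 as well.
consumer_chatbot = AIUsageProfile(False, False, False, False, False)
print(heppner_exposure(consumer_chatbot))  # prints all four failures
```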
Enterprise RAG platforms built for compliance — where vendor agreements carry explicit data sovereignty commitments — satisfy conditions two and three structurally. The knowledge base has to be accurate. It also has to be governed against external disclosure. Those are different requirements, and until Heppner, most enterprise teams were only thinking about the first one.
Mojar's enterprise agreements include explicit prohibitions on using client documents for model training and on third-party data disclosure. That structure was always necessary for regulated industries. After Heppner, it is necessary for any legal or compliance function using AI.
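Conditions two and three live in the vendor contract. Condition four lives in your own records. As a hedged illustration, assuming a hypothetical internal helper and an append-only JSONL log of our own invention, a legal team could record each session's supervising counsel and direction memo so that AI usage is later documentable as counsel-directed:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_counsel_directed_session(matter_id: str, counsel: str,
                                 direction_memo: str, prompt: str) -> dict:
    """Append one counsel-directed AI session to an audit log.

    Illustrative only: the field names and the append-only JSONL file
    are our assumptions, not a standard or any vendor's API.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,            # ties the session to a specific matter
        "supervising_counsel": counsel,    # condition 1: who directed the use
        "direction_memo": direction_memo,  # condition 4: counsel's written direction
        # Hash the prompt instead of storing it, so the audit trail is not
        # itself a second copy of the underlying analysis.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }
    with open("ai_session_audit.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```

Hashing the prompt rather than storing it is a deliberate choice in this sketch: an audit trail that contains the full text of every session would be a second discoverable copy of the underlying analysis.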
The week this ruling dropped, the FTC published its own AI governance guidelines, drawing lines around AI accuracy and organizational accountability. The regulatory direction is consistent: consumer AI in enterprise legal and compliance contexts carries compounding exposure, and that exposure runs through the vendor contract, not the internal usage policy.
The closer
The Heppner ruling is early doctrine in an area courts are still working out. There will be conflicting decisions. The law will move. But enterprises cannot wait for settled case law before deciding which AI tools their legal and compliance teams are allowed to use.
The question is not: can our AI handle legal work?
The question is: when the government asks our AI vendor for those conversations, what does their privacy policy say?
If you don't know the answer offhand, you have your answer.
Sources: Insurance Business America | JD Supra / Mayer Brown | Bloomberg Law | National Law Review | Compliance Week
Frequently Asked Questions
Are conversations with consumer AI tools like Claude or ChatGPT privileged?
Not automatically. The Heppner ruling (SDNY, Feb. 27, 2026) found that consumer AI interactions are discoverable because commercial privacy policies permit disclosure to government authorities. Protection requires attorney supervision, contractual confidentiality guarantees, and documented legal strategy direction, none of which consumer AI tools satisfy.
What happened in United States v. Heppner?
United States v. Heppner (SDNY 2026): a former CEO used consumer Claude after receiving a grand jury subpoena, generating 31 documents analyzing his legal exposure. Judge Jed Rakoff ordered full disclosure: Anthropic's privacy policy permits third-party disclosure, Claude disclaims legal advice, and no attorney supervised the interaction.
What conditions would protect AI-assisted legal work?
The court indicated protection requires four conditions: AI directed by legal counsel, an enterprise vendor agreement that prohibits data disclosure and model training on client inputs, no third-party data sharing, and usage grounded in counsel's legal strategy. Consumer-tier AI tools fail conditions two and three by design.
How does this affect M&A teams?
M&A teams feeding confidential documents into consumer AI (term sheet analysis, due diligence summaries, deal risk reviews) generate potentially discoverable evidence. Post-closing disputes, earnout litigation, and breach claims can all draw on those AI sessions. The vendor's privacy policy, not the internal NDA, governs.