Court Decisions Diverge on AI Attorney-Client Privilege


Two recent federal court decisions—issued one week apart—reach sharply divergent conclusions on whether materials generated using artificial intelligence (“AI”) platforms are protected by the attorney-client privilege or the work product doctrine.

In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), U.S. District Court Judge Rakoff held that exchanges with Claude (Anthropic’s publicly available AI tool)—which a criminal defendant used independently of his counsel to analyze his exposure and defense strategy—were neither privileged nor protected work product. The Court found that Claude is not an attorney, that Anthropic’s privacy policy (which permits data collection, model training, and third-party disclosure) destroyed any reasonable expectation of confidentiality, and that materials created without counsel’s direction did not qualify for work product protection.

In Warner v. Gilbarco (E.D. Mich. Feb. 10, 2026), U.S. Magistrate Judge Patti reached the opposite conclusion, denying defendants’ motion to compel the production of a pro se plaintiff’s ChatGPT-assisted materials. The Court reasoned that AI platforms are “tools, not persons,” that a waiver of work-product protections requires disclosure to an adversary (not to software), and that compelling such discovery “would nullify work-product protection in nearly every modern drafting environment.” Note that the Warner analysis did not address attorney-client privilege, likely because the plaintiff was pro se.

Taken together, these decisions underscore that the law governing AI use in litigation is unsettled and fact-sensitive. Litigants, in-house counsel, and compliance teams should act with care in deploying AI in connection with investigations and disputes, including taking privilege and work product issues into account.

The Core Tension

The decisions reflect two competing frameworks with significant practical consequences. In Heppner, Judge Rakoff treated AI as a third-party recipient, holding that the platform’s terms of service govern confidentiality. Because Anthropic’s privacy policy permitted data collection and disclosure to third parties, any attorney-client privilege that might have attached was destroyed. The Court also required attorney direction for work product protection, finding the defendant’s independent use of Claude insufficient to qualify; Judge Rakoff determined that the defendant’s voluntary disclosure of information to the AI platform constituted a waiver of work-product protection.

Warner takes a fundamentally different approach. Magistrate Judge Patti characterized AI platforms as “tools, not persons”—analogous to word processing software. The Court did not directly address privilege (the plaintiff was pro se), but held that work product protection applies to a litigant’s mental impressions regardless of counsel involvement. Critically, the Court applied a narrower waiver standard than Heppner: work product protection is lost only by an affirmative disclosure to an adversary or in circumstances reasonably likely to reach an adversary’s hands. The plaintiff’s use of the software did not, in Judge Patti’s view, reach that threshold.

Given the Southern District of New York’s prominence, Heppner will likely be cited frequently, even as Warner offers a more technology-accommodating framework. 

Key Takeaways

  1. Treat AI interactions as potentially discoverable. Just as email reshaped discovery, generative AI will follow. Assume that prompts and outputs are logged on third-party servers and may be subject to subpoenas or discovery requests, regardless of privilege arguments. Update litigation holds and preservation protocols to address AI-generated content, including prompt inputs, platform outputs, and locally saved records. 
  2. Avoid inputting privileged or confidential information into consumer AI tools. Employees and clients must understand that communications with public AI platforms are not confidential and should not be treated as substitutes for privileged communications with attorneys. Organizations should adopt policies strictly prohibiting the use of non-enterprise AI tools, such as personal consumer-grade AI accounts, for privileged or confidential information. 
  3. Conduct mandatory legal review of platform terms before use. Before using any AI platform for litigation-related tasks, evaluate its privacy policy and terms of service: Are inputs excluded from model training? Are disclosures to third parties limited or prohibited absent legal compulsion? What security and retention commitments are made? 
  4. Prefer enterprise AI configurations with stronger contractual confidentiality protections. Generally available consumer-grade AI tools are governed by broad terms that disclaim confidentiality, a central factor in Heppner. By contrast, closed enterprise commercial-grade AI systems are designed to address privacy through segregated, enterprise-specific data environments and customizable access controls. Key features to seek include contractual commitments that user inputs will not be used for model training, a closed instance dedicated to that customer’s use, restrictions on third-party data sharing, and documented confidentiality undertakings. Even with enterprise tools, organizations must closely scrutinize licensing terms to ensure alignment with privilege and confidentiality expectations.
  5. Use AI at counsel’s direction and document the workflow. Heppner signals that attorney direction may be critical. Had the defendant’s use of Claude been directed by his counsel, Judge Rakoff suggested the AI “might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent.” Document that AI use is at counsel’s direction and in anticipation of litigation to strengthen work product arguments. 
  6. Preserve work product arguments distinct from privilege. Even if AI communications are deemed non-confidential for privilege purposes, work product protection may still apply where materials reflect litigation strategies or mental impressions and have not been disclosed to an adversary. The work product doctrine’s narrower waiver standard—requiring disclosure to an adversary, not merely to any third party—may prove more resilient to AI-related challenges than attorney-client privilege. 
  7. Be prepared to resist intrusive AI-related discovery. Parties should argue, as in Warner, that broad requests for AI prompts and outputs are disproportionate, irrelevant to the merits, and aimed at uncovering protected mental impressions rather than discoverable facts. 
  8. Establish cross-departmental governance. Legal, compliance, IT, and business leadership should jointly oversee AI protocols, maintain clear channels for raising privilege concerns, and adapt policies as technologies and the legal landscape evolve. 

Looking Ahead

Courts are actively grappling with how traditional privilege doctrines apply to generative AI. One model (Heppner) emphasizes platform privacy terms, third-party disclosure risks, and the absence of attorney oversight; the other (Warner) focuses on the functional role of AI as a drafting tool and preserves robust protection for litigation strategy. The question neither Court fully answered is what happens when someone feeds already-privileged material into an AI tool, and that is where companies should be paying attention. We expect more litigation on this front soon.

Bars have recognized and commented on this dilemma. A recent opinion by the New York City Bar Association discussed how attorneys can best meet their ethical obligations of confidentiality, competence, and loyalty when using AI tools such as notetaker applications. That opinion acknowledged the risks to attorneys when clients act independently, using AI to record conversations and to explore legal strategies and analysis. Its recommendations range from a prohibition on the use of AI for legal discussions to the addition of provisions in retainer agreements noting the potential loss of confidentiality and privilege if AI is used without attorney direction. [1] 

The practical upshot is this: the question is no longer whether AI use implicates privilege. It is how AI is used, and whether that use preserves the structural conditions that privilege requires. Until appellate courts and state bars provide clearer guidance, litigants should assume that AI-assisted work product may be treated differently across jurisdictions and should structure their AI use accordingly.
