AI Chat Privilege in Litigation: Warner v. Gilbarco and Heppner Point in Opposite Directions
Shortly after the ruling in Warner v. Gilbarco, in which a court suggested that ChatGPT chat history could be covered by the work-product doctrine, a federal district court in New York reached a very different conclusion.
In United States v. Heppner, the court held that a defendant's communications with the Claude AI platform were protected by neither the attorney-client privilege nor the work-product doctrine in connection with a pending criminal investigation.
The Heppner Decision: AI Chats Are Not Privileged
The defendant, Bradley Heppner, was indicted on fraud-related charges. During an FBI search conducted pursuant to a warrant, documents reflecting his communications with Claude were discovered. Heppner pleaded not guilty, and his counsel asserted attorney-client and work-product privilege over the Claude communications, arguing they were made in preparation for seeking legal advice and included information learned from counsel. Notably, defense counsel admitted he had not directed Heppner to consult Claude.
The court rejected those arguments. It held that the attorney-client privilege did not apply because the communications were not between the defendant and his attorney, emphasizing that the privilege requires a relationship of trust with a human lawyer. The court also pointed to Claude’s privacy policy, which permits collection and potential disclosure of user inputs and outputs to third parties, including regulators.
The court likewise found the work-product doctrine inapplicable. Even if the materials were prepared in anticipation of litigation, the defendant acted independently rather than at the direction of counsel, which defeated work-product protection.
A Growing Split Among Courts on AI Privilege
The Heppner decision’s divergence from Warner v. Gilbarco highlights that courts are only beginning to address AI-related privilege issues, and different jurisdictions may reach different conclusions.
What Employers Should Know
The Heppner decision is both instructive and concerning. It underscores that AI chat history may be discoverable in litigation and serves as a warning for businesses to:
- Vet AI platforms and understand their privacy policies;
- Train employees on legal risks and appropriate use;
- Scrutinize AI use in sensitive HR, personnel, or trade secret matters;
- Consider document retention practices for AI chat history;
- Limit use of AI for sensitive matters to designated personnel.
As AI becomes more embedded in day-to-day operations, organizations should take proactive steps to manage these evolving legal risks.
Contact the author, Linda Wang, Partner & Co-Chair of CDF’s Privacy Practice Group, if you have questions regarding this blog or would like to inquire about a consultation regarding your organization’s AI policies and procedures.