Federal judge rules AI chat logs are not protected by legal privilege

Federal rulings confirm that AI platforms lack the status of human legal confidants

Legal professionals in the United States have issued urgent warnings against treating artificial intelligence chatbots as trusted confidants, particularly in medical, legal, or financial matters.

These advisories gained significant momentum following a landmark ruling by a federal judge in New York this year.

The court determined that a former CEO of a bankrupt financial services firm could not shield his AI interactions from prosecutors in a securities fraud case.

Consequently, attorneys are now advising that conversations with platforms such as OpenAI’s ChatGPT and Anthropic’s Claude may be subpoenaed by adversaries in both criminal and civil litigation.

Alexandria Gutiérrez Swette, a lawyer at the New York-based firm Kobre & Kim, emphasised the gravity of the situation, stating, "We are telling our clients: You should proceed with caution here."

While communication between a lawyer and their client is almost always deemed confidential under US law, AI chatbots are not recognised as legal entities.

Sharing a lawyer's advice with an AI tool can effectively "erase" attorney-client privilege, as voluntarily revealing information to a third-party platform typically waives these customary protections.

In February, a Manhattan-based U.S. District Judge ruled that users must surrender all documents generated by Claude related to their legal cases, noting that no attorney-client relationship "could exist" with such platforms.

Furthermore, generative AI programs are increasingly being flagged for fabricating "made-up cases" in legal filings.

Both OpenAI and Anthropic state in their terms of service that they may share data with third parties and explicitly require users to consult qualified human professionals for critical advice.