Well, That's Not Privileged, Stupid
A reality check on AI tools, attorney-client privilege, and why lawyers shouldn't be surprised by OpenAI's recent statements
Sam Altman recently caused a stir when he clarified that conversations with ChatGPT are not protected by attorney-client privilege and can be disclosed pursuant to court orders. The response from both lawyers and the general public has been spirited, to say the least. But anyone acting surprised by this revelation hasn't been paying attention to how online tools work or to what companies have been telling us all along in their terms of service.
The Current Controversy
Altman's comments came during discussions about AI governance and transparency. He stated plainly that OpenAI, like any technology company, will comply with valid legal process. If a court orders them to produce user data, including chat logs, they'll comply. This isn't some newfound corporate overreach; it's basic legal compliance.
Yet somehow, this has shocked people who have apparently been using ChatGPT to discuss sensitive personal information, financial details, relationship problems, and health issues under the assumption that their conversations were somehow magically protected. Whether you're sharing your therapy struggles, asking for help with tax strategies, or seeking advice about a messy divorce, that information can potentially be subpoenaed. Just like your emails! Or your messages!
Attorney-Client Privilege 101
Let's start with the basics. Attorney-client privilege protects confidential communications between lawyers and their clients for the purpose of seeking or providing legal advice. The key elements are:
Communication between attorney and client
Made in confidence (no third parties present)
For the purpose of seeking or providing legal advice
The client can claim the privilege over all such communications, and the attorney may not disclose them to anyone. You still with me? Okay, good.
Notice what's missing from that definition? There is no mention of ChatGPT, Claude, or any other AI tool serving as either the attorney or the client.
A note for lawyers: If you type your client's confidential information into a public version of ChatGPT, you're not communicating with your client. You're communicating with a third party, which in this case is OpenAI. That third party happens to be a chatbot, but it's still a third party. The moment you hit "send," you've potentially destroyed the confidential nature of that communication.

Enterprise versions of these AI tools offer stronger protections: your data typically isn't used to train models and is handled under stricter contractual confidentiality terms, though it still passes through the provider's servers. This 'walled garden' approach may help keep your privileged data private to you. Even with enterprise versions, however, lawyers should carefully review the terms of service and data handling practices of the AI tools they use. Some jurisdictions have also issued specific guidance on AI use by attorneys.
The Digital Reality Check
This isn't unique to AI tools. It's true for virtually every digital service we use:
Email providers (Gmail, Outlook, etc.) can be compelled to produce emails
Social media platforms must comply with valid subpoenas for user data
Messaging apps like WhatsApp and Signal may face legal pressure (though end-to-end encryption limits what they can actually hand over)
Cloud storage services can be ordered to turn over documents
Banking and financial apps regularly comply with court orders
Therapy and mental health apps have been subpoenaed for user data
Dating apps have turned over user messages in criminal cases
Reading the Fine Print (That I Spend Time Writing)
I know you don't read the Terms of Service that lawyers painfully put together for your review. I know you look at that pop-up and, like brushing off a gnat, hit submit and move on. But here's the thing, buddy: AI platforms' terms of service have always included standard legal disclosure provisions. Let me walk you through what these typically look like across major platforms.
OpenAI's Privacy Policy includes language about complying with legal process, including court orders, subpoenas, and government requests. This is standard language you'll find in virtually every terms of service agreement for any online platform.
Anthropic (Claude's maker) has similar provisions about legal compliance and data disclosure. Their policy states: "We will also provide users notice if their data is requested, unless we believe we're legally prohibited from doing so, or other rare exceptions apply, including where a child is at risk of harm or in cases of emergency."
Google's Gemini terms include the same types of legal process exceptions.
These are standard operating procedure for any company that stores user data. The legal disclosure exception typically reads something like:
"We may disclose your information if required to do so by law or in response to valid requests by public authorities, including to meet national security or law enforcement requirements."
The Apple Exception (And Why It's Rare)
Now, some companies do occasionally push back against government requests. The most famous example was Apple's 2016 battle with the FBI over unlocking the San Bernardino shooter's iPhone. Apple refused to create a "backdoor" that would have weakened the encryption protecting its devices. This ultimately forced the FBI to find another way to access the device.
But here's what's important to understand about that case: Apple was fighting against being compelled to create new technology that would weaken security for all users. They weren't refusing to comply with valid legal process for data they already had access to. If Apple had stored your text messages in readable form on their servers and had received a valid court order, they would have (probably) turned them over.
The Business Reality
From a business perspective, these companies have no choice. They're not going to risk contempt of court charges or regulatory sanctions to protect your chat logs. They're not law firms. They don't have attorney-client privilege with you. They're technology companies that happen to provide a service you find useful.
Moreover, these companies are often subject to various regulatory requirements, government contracts, and international law enforcement cooperation agreements.
What This Means for Everyone
The implications go far beyond legal practice:
Don't share sensitive personal information with AI tools unless you're comfortable with potential disclosure
Understand that "confidential" conversations with AI aren't actually confidential
Read the terms of service for tools you're using, especially for sensitive purposes
Consider the context of what you're sharing. Is this something you'd be comfortable with in a court filing?
Use alternative methods for truly sensitive communications, like calling or chatting live. Here's a thought: meet in person!
Arguments with Algorithms explores the intersection of AI, law, and professional practice. If you're a lawyer, technologist, or just someone trying to make sense of our AI-integrated world, subscribe for weekly insights on navigating these changes thoughtfully and practically. And if you enjoyed this, please consider becoming a paid member!