
TL;DR
- OpenAI CEO Sam Altman cautions users about treating ChatGPT as a therapist.
- No legal privilege exists for AI-based conversations, unlike with human doctors or lawyers.
- Privacy risks could slow adoption and create legal exposure for users.
- OpenAI is currently challenging court orders demanding user data access.
AI Therapy and Legal Loopholes
OpenAI CEO Sam Altman has publicly voiced concern over the lack of legal protections surrounding emotionally sensitive conversations with AI tools like ChatGPT. Speaking on This Past Weekend w/ Theo Von, Altman highlighted a critical gap: there is no legal confidentiality when people share personal or mental health concerns with AI models.
While therapists, doctors, and lawyers are bound by confidentiality laws, current legislation does not extend the same legal protections to AI chatbots. “People talk about the most personal sh** in their lives to ChatGPT,” Altman noted, emphasizing that users—particularly younger demographics—are using the tool as a therapist or life coach.
“We haven’t figured that out yet for when you talk to ChatGPT,” Altman said.
Privacy Threats and Legal Risks
Altman pointed out the potential legal exposure facing ChatGPT users. If a lawsuit arises, OpenAI could be compelled to share user conversations, a scenario that’s already playing out. The company is currently appealing a court order in its legal dispute with The New York Times, which could require OpenAI to retain and disclose millions of user chat logs.
OpenAI described the ruling as “an overreach” that threatens user trust and violates core privacy expectations.
In Altman’s view, the lack of legal clarity around AI interactions is becoming a blocker to broader adoption, especially as users become more privacy-conscious. During the podcast, Altman acknowledged the concerns of host Theo Von, who admitted he avoids using ChatGPT heavily for this very reason.
ChatGPT Legal & Privacy Context
| Metric / Topic | Status / Detail |
| --- | --- |
| AI legal privilege | ❌ Not yet established |
| User privacy protection (standard model) | Limited – subject to legal discovery |
| OpenAI legal dispute | Ongoing appeal with NYT over data access |
| AI therapy usage (informal) | Common among Gen Z users |
| Comparable protections (doctor/lawyer) | ✅ Legal confidentiality applies |
| Court order scope | Excludes ChatGPT Enterprise clients |
Echoes of Post-Roe Data Caution
The privacy concerns around AI chat interactions mirror the digital behavior shifts that followed the Supreme Court’s reversal of Roe v. Wade. In response to surveillance fears, many users moved from standard period-tracking apps to more privacy-protective platforms like Apple Health. The same risk model may now apply to sensitive AI conversations—especially those involving mental health, legal issues, or personal relationships.
According to Altman, conversations with ChatGPT could be subject to subpoena, much like social media DMs or search history in criminal investigations. Without clear privacy safeguards, this leaves users with a significant legal vulnerability.
AI Conversations: A Legal Gray Zone
Altman suggested that regulators and AI companies must act swiftly to address this oversight. He advocated for extending therapist-like confidentiality to AI systems, at least in high-trust use cases like therapy, mental health guidance, or legal aid.
“We should have the same concept of privacy for your conversations with AI that we do with a therapist,” Altman said.
Without such protections, users may hesitate to fully engage with AI—particularly in domains that require deep trust and data security.
Final Thoughts
While the use of AI chatbots in therapy-like roles is increasing, the legal and ethical frameworks surrounding those interactions are still catching up. Until legal protections are standardized, users are advised to avoid sharing sensitive personal data with AI models like ChatGPT—unless they’re using enterprise-grade privacy tools.