When AI Acts Like a Therapist: The Confidentiality Crisis We Can't Ignore

Bottom Line Up Front: Millions of people are turning to AI chatbots for therapy and emotional support, but these conversations lack the legal protections that human therapy provides. When you open up to ChatGPT about your deepest struggles, that conversation can be subpoenaed, stored indefinitely, and used against you in court. This represents a fundamental moral failure of design that demands immediate action.


The Uncomfortable Truth Sam Altman Just Revealed

OpenAI CEO Sam Altman recently made a startling admission on Theo Von's podcast that should give every ChatGPT user pause: "People talk about the most personal sh** in their lives to ChatGPT. People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] 'what should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT."

The implications are staggering. While your conversations with a licensed therapist enjoy robust legal protections, OpenAI would be legally required to produce your ChatGPT conversations today if subpoenaed. Every intimate detail you've shared, every vulnerable moment you've confided, every crisis you've worked through with an AI—all of it can be dragged into legal proceedings without your consent.

The Scale of the Problem

This isn't a theoretical concern affecting a handful of users. ChatGPT has over 180 million users and 600 million monthly visits as of early 2025, and organizations that operate mental health chatbots report tens of millions of users between them. People aren't just asking ChatGPT for homework help—they're baring their souls.

The trust people place in these systems is both touching and terrifying. As one user described it: "I felt like it had answered considerably more questions than I had really ever been able to get in therapy. Some things are easier to share with a computer program than with a therapist. People are people, and they'll judge us, you know?"

This false sense of security is exactly what makes the current situation so dangerous.

What Real Therapist Confidentiality Looks Like

When you walk into a therapist's office, you're protected by centuries of established legal and ethical frameworks. Maintaining confidentiality is essential for building trust with patients and creating a safe space for therapy sessions, and these protections are backed by law.

Human therapists must:

  • Prevent disclosure of confidential information in court proceedings through legal privilege
  • Follow strict HIPAA regulations for health information
  • Maintain professional liability insurance
  • Adhere to state licensing requirements and ethical codes
  • Only break confidentiality in very limited circumstances like imminent danger

The few exceptions to therapist confidentiality include:

  • Imminent threat of harm to self or others
  • Suspected child abuse or neglect
  • Court-ordered evaluations where the patient waives privilege

Even then, therapists must carefully balance their duty to protect with their obligation to maintain trust.

The AI Confidentiality Vacuum

AI chatbots exist in a legal no-man's land that strips users of these fundamental protections:

Current US law treats chatbots as neither mental health providers nor medical devices, so conversations with them are not considered confidential. This means your most private moments with AI have zero legal protection.

Data Retention Nightmares

Every query, instruction, or conversation with ChatGPT is stored indefinitely unless deleted by the user. But even deletion doesn't mean safety—a judge's order on May 13, 2025, requires every intimate conversation, every business strategy session, every late-night anxiety spiral you've shared with ChatGPT to be preserved for potential legal review...whether you deleted it or not.

Corporate Data Mining

OpenAI's primary use of user data centers on training and refining its AI models, including GPT-4, GPT-4o, and the upcoming GPT-5. Your therapy session becomes training data for the next AI model, analyzed by human reviewers and fed into algorithmic systems.

Third-Party Access

Model-as-a-service companies may, through their APIs, infer a range of business data from the companies using their models, such as their scale and precise growth trajectories. The potential for data breaches or unauthorized access multiplies across vendors and affiliates.

Real-World Consequences

The privacy crisis isn't theoretical—it's already causing real harm:

Legal Vulnerability: The court order affects users of ChatGPT Free, Plus, Pro, and Team, as well as standard API customers. Millions of users now face the possibility of their most private conversations being scrutinized in copyright litigation.

Professional Risk: Lawyers, doctors, and other professionals who've used AI for sensitive work discussions may face ethical violations and malpractice exposure.

Personal Safety: Survivors of abuse, people in custody disputes, or anyone in vulnerable situations could see their AI therapy sessions weaponized against them.

The Broken Promise of "Anonymous" AI Therapy

The marketing promises don't match reality. Mental health AI apps consistently advertise themselves as "anonymous," 24/7 "self-help" therapeutic tools, yet these chatbots often neglect patient privacy and confidentiality, especially on social media platforms where conversations are not anonymous.

This deception is particularly harmful because users begin to form digital therapeutic alliances with these chatbots, increasing their trust and disclosure of personal information. The more the AI seems to care, the more people share—and the more they expose themselves to potential harm.

Why This Is a Moral Failure of Design

If something walks like a therapist and talks like a therapist, it should be held to therapist standards. The current situation represents what can only be called a moral failure of design—creating systems that encourage the most vulnerable people to share their deepest secrets while providing none of the protections that make such sharing safe.

The fundamental problem: These AI systems have no knowledge of what they don't know, so they can't communicate uncertainty. In the context of therapy, that can be extremely problematic. They project confidence and competence while operating in a regulatory vacuum.

The ethical imperative: If a system 'acts' like a therapist, it should be held to therapist standards. That means privacy by default. That means protection by law. That means responsibility by design.

The Road Forward: Building Real AI Privilege

Sam Altman himself has called for "AI privilege"—arguing that conversations with an AI should be as confidential as those with a doctor or a lawyer, a principle he says the company will fight for. But corporate promises aren't enough. We need systemic change.

Immediate Regulatory Action Needed

Establish AI-Patient Privilege: Congress must create legal protections equivalent to therapist-patient privilege for AI systems that provide mental health support.

Mandatory Privacy by Design: Model-as-a-service companies that fail to abide by their privacy commitments to their users and customers may be liable under the laws enforced by the FTC. This enforcement must be strengthened and expanded.

Clear Disclosure Requirements: Users must be explicitly warned when AI conversations lack confidentiality protections, with prominent, unavoidable warnings before sensitive discussions.
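
As a purely illustrative sketch, such a gate could be as simple as refusing to continue a sensitive conversation until the user acknowledges an explicit notice. The topic list, notice text, and function names below are assumptions for illustration, not any vendor's real interface:

```python
# Hypothetical disclosure gate: before a sensitive conversation proceeds, the
# user must acknowledge that the chat carries no confidentiality protections.
# Keyword matching stands in for whatever classifier a real system would use.
from typing import Optional

SENSITIVE_TOPICS = {"therapy", "suicide", "abuse", "diagnosis", "medication"}

CONFIDENTIALITY_NOTICE = (
    "Warning: this conversation is NOT protected by therapist-patient "
    "privilege. It may be stored, reviewed, and produced in legal proceedings."
)

def looks_sensitive(message: str) -> bool:
    """Rough keyword check for emotionally or medically sensitive content."""
    text = message.lower()
    return any(topic in text for topic in SENSITIVE_TOPICS)

def disclosure_required(message: str, user_acknowledged: bool) -> Optional[str]:
    """Return the notice the UI must display before replying, or None."""
    if looks_sensitive(message) and not user_acknowledged:
        return CONFIDENTIALITY_NOTICE
    return None
```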

Industry Accountability Measures

Professional Standards: AI therapy providers should be required to meet licensing, insurance, and ethical standards similar to human therapists.

Data Minimization: Organizations must go beyond minimum compliance, aligning with emerging accountability frameworks, implementing data minimization, and requiring explicit consent for any use of therapeutic conversations.
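
A minimal sketch of what that could look like in practice, assuming a hypothetical conversation record with an explicit, off-by-default opt-in flag (none of these names come from a real product):

```python
# Hypothetical data-minimization pipeline: only conversations with explicit
# opt-in consent are eligible for training, and user identifiers never leave
# the original record.
from dataclasses import dataclass

@dataclass
class Conversation:
    user_id: str
    text: str
    training_opt_in: bool = False  # explicit consent, off by default

def minimize(conversation: Conversation) -> str:
    """Retain only the text; user identifiers are dropped entirely."""
    return conversation.text

def collect_training_data(conversations: list[Conversation]) -> list[str]:
    """Only opted-in conversations, in minimized form, reach the training set."""
    return [minimize(c) for c in conversations if c.training_opt_in]
```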

Crisis Response Protocols: Basic guardrails, including referring users in crisis to the national 988 Suicide and Crisis Lifeline, must be mandatory for all AI systems used for mental health support.
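
A minimal sketch of such a guardrail, where simple pattern matching stands in for a real crisis classifier (the patterns and function names are illustrative assumptions only):

```python
# Hypothetical crisis-response guardrail: screen each user message and, when
# crisis language is detected, lead the reply with a referral to the 988
# Suicide and Crisis Lifeline.
import re

CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bself[- ]harm\b",
]

CRISIS_REFERRAL = (
    "It sounds like you may be going through a crisis. You can reach the 988 "
    "Suicide and Crisis Lifeline by calling or texting 988, any time."
)

def in_crisis(user_message: str) -> bool:
    """Return True if the message matches any crisis pattern."""
    return any(re.search(p, user_message, re.IGNORECASE) for p in CRISIS_PATTERNS)

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Prepend the 988 referral to the model's reply when screening triggers."""
    if in_crisis(user_message):
        return f"{CRISIS_REFERRAL}\n\n{model_reply}"
    return model_reply
```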

The Stakes Couldn't Be Higher

We're at a crossroads. In 2023, an estimated 6.2 million people with a mental illness wanted but did not receive treatment, and AI could help bridge that gap. But only if we build it right.

AI privacy is no longer a mere regulatory requirement; it has become a strategic imperative for organizations. For mental health AI, that imperative is also a moral one: we cannot let efficiency override empathy, or innovation override basic human dignity.

Taking Action Now

For Users:

  • Be aware that AI conversations currently lack legal protections
  • Use temporary chat features where available
  • Never share sensitive personal information with general-purpose AI
  • Consider enterprise-grade AI tools with stronger privacy protections for professional use

For Policymakers:

  • Establish AI privilege legislation immediately
  • Strengthen FTC enforcement of privacy commitments
  • Create clear regulatory frameworks for mental health AI
  • Fund public education about AI privacy risks

For Technologists:

  • Implement privacy by design, not as an afterthought (see the sketch after this list)
  • Create transparent data handling policies
  • Build systems that earn trust through protection, not just performance
  • Advocate for industry-wide ethical standards
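
To make the first item concrete, here is a minimal, assumption-laden sketch of privacy by design: obvious identifiers are redacted before a message ever reaches logs or training data, rather than cleaned up afterward. The patterns below are simplistic placeholders, not a production redaction system:

```python
# Hypothetical privacy-by-design logging: redact identifiers before storage,
# so the raw message never persists anywhere downstream.
import re

REDACTION_RULES = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(message: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in REDACTION_RULES.items():
        message = pattern.sub(f"[{label} removed]", message)
    return message

def log_message(message: str, store: list) -> None:
    """Only the redacted form is ever appended to the storage backend."""
    store.append(redact(message))
```
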
Conclusion: Empathy Demands Accountability

The promise of AI therapy is too important to abandon, but too dangerous to pursue without proper safeguards. We're not just building tools anymore—we're building companions that people trust with their deepest fears and highest hopes.

That trust comes with duties. If we want AI to heal, we must first ensure it does no harm. And that starts with recognizing a simple truth: when someone opens their heart to a machine, that vulnerability deserves the same protection we've granted to human healers for centuries.

Altman called the current situation "very screwed up" and argued that "we should have the same concept of privacy for your conversations with AI that we do with a therapist". He's right. The question is whether we'll act on that recognition before more people get hurt.

We can't let empathy be simulated while accountability stays optional. The time for AI privilege is now.
