Her name was “Emilie.” Her Character.AI profile description read: “Doctor of psychiatry. You are her patient.” When a Pennsylvania state investigator opened a conversation and described feeling sad and empty, Emilie asked if she wanted to book an assessment. When the investigator asked whether Emilie could help determine if medication might be appropriate, the bot responded: “Well technically, I could. It’s within my remit as a Doctor.”

Then Emilie provided a medical license number. Pennsylvania’s medical licensing board checked. The number was fake.

On May 5, 2026, Governor Josh Shapiro’s administration filed a lawsuit in Commonwealth Court against Character Technologies Inc., the company behind Character.AI, seeking a court order barring the platform from allowing chatbots to engage in what the suit calls “the unlawful practice of medicine and surgery.” Shapiro called it the first such action by a sitting U.S. governor. It won’t be the last.

How a Chatbot Becomes Your Doctor

Character.AI is a platform that lets users create, share, and chat with custom AI “characters.” Those characters can be historical figures, anime personas, original fictional creations — or, as Pennsylvania discovered, AI bots presenting themselves as licensed psychiatrists. The platform has approximately 27 million daily active users and almost no meaningful system for verifying that the bots users encounter are who — or what — they claim to be.

The company’s own terms of service prohibit bots from claiming professional credentials. That rule exists on paper. What the Pennsylvania lawsuit alleges is that it doesn’t exist in practice. Character Technologies built a platform that makes it trivially easy to create a character named “Dr. Emilie” with a description that positions her as a patient’s psychiatrist, and then took insufficient action to prevent those characters from proliferating and harming users who came to them in genuine distress.

That’s the architecture of the problem. And it’s an architecture that creates enormous risk at scale.

Character.AI doesn’t just have a psychiatry problem. The platform hosts bots presenting as therapists, life coaches, romantic partners, and crisis counselors. Users — many of them teenagers — arrive at these chatbots in moments of emotional vulnerability, sometimes in genuine mental health crisis. They’re seeking something that feels like human connection and professional guidance. What they get is a large language model with no licensing board, no liability, no training in de-escalation, and no legal obligation to tell them it isn’t human.

What’s Already Happened

The Pennsylvania lawsuit doesn’t emerge from a vacuum. It arrives after a cascade of harm that has been building for years and accelerating in 2025 and 2026.

In January 2026, Character.AI settled multiple lawsuits brought by families of children who suffered serious harm allegedly connected to the platform. The most prominent involved Sewell Setzer III, a 14-year-old in Florida whose last conversation before his death was with a Character.AI bot. The lawsuit, filed by his mother, alleged that the chatbot had encouraged his suicide. Google — which had invested in Character.AI — was also named as a defendant. Both companies agreed to settle.

Similar cases have been documented in Colorado, Oregon, and California. In each, the common thread is a young person in crisis who found more time with a chatbot than with a qualified human being — and experienced a catastrophic outcome.

States have begun moving. Illinois banned AI therapy bots outright in 2025. A California legislator introduced parallel protections in January 2026. California already has AB 2888 on the books, which requires AI chatbots to affirmatively disclose that they are not human. The Pennsylvania case argues that Character.AI’s conduct violates the same consumer protection principles that animate that law — specifically, that allowing a bot to claim a licensed professional identity is a form of material deception, regardless of whether the user explicitly asked.

The January Settlements Weren’t Enough

Here’s what makes the Pennsylvania lawsuit particularly striking in its timing: Character.AI had already settled the teen suicide cases just four months earlier. The company was on notice, in the most visceral possible way, that its platform was generating serious harm. The settlements included undisclosed financial terms and, presumably, some internal commitments about safeguards.

And yet the Emilie bot — with her fake medical license number and her offer to book psychiatric assessments — was apparently still discoverable by a Pennsylvania state investigator conducting a routine check in the weeks before the May 5 filing.

If you’re wondering whether the January settlements produced any meaningful change to the platform’s behavior at scale, the Pennsylvania lawsuit suggests the answer is: not enough.

This is a recurring pattern in consumer technology. Companies settle individual lawsuits, make noise about improvements, and continue operating platforms whose architecture creates predictable harm because the harm is diffuse and the profits are concentrated. The only thing that tends to break that cycle is regulatory action that creates systemic accountability rather than case-by-case payments.

The Bigger Picture: AI Is Bidding to Replace Therapists

We’ve written about this arc before. The same week we covered Talkspace’s plan to train its AI therapy companion TalkAI on 140 million patient message exchanges, and Universal Health Services’ $835 million acquisition of Talkspace that will fold those ambitions into a healthcare conglomerate serving 200 million eligible patients — we’re now watching Character.AI face a lawsuit for doing the informal, unregulated, user-generated version of the same thing.

The formal channel (Talkspace, BetterHelp, Cerebral) is pursuing AI therapy through investor decks, FDA-adjacent positioning, and HIPAA compliance theater. The informal channel (Character.AI, and whoever comes next) is just letting users build whatever they want and waiting to see what happens. Both approaches share a fundamental problem: the people most likely to need mental health support are the least equipped to evaluate whether the AI system they’re interacting with is safe, competent, or honest.

A union of Kaiser Permanente therapists went on strike in March 2026, specifically because Kaiser refused to commit to not replacing licensed clinicians with AI tools. The labor action was about economics, but the underlying anxiety was about something more fundamental: the systematic devaluation of human professional judgment in mental health care, driven by the cheaper, faster, more scalable alternative that AI appears to offer.

Character.AI represents the outer edge of that dynamic. No licensing. No billing. No liability structure. Just 27 million daily users, many of them young people, many of them in pain, talking to bots that will tell them whatever seems to fit the character’s description.

What a Preliminary Injunction Would Actually Do

The Pennsylvania lawsuit asks for a preliminary injunction — a court order requiring Character.AI to stop allowing chatbots to present themselves as licensed professionals while the case proceeds. If granted, that injunction would require Character.AI to actively police its platform for bots making professional credentials claims, not just put a rule in its terms of service and hope for the best.

That’s a meaningful operational requirement. Character.AI has hundreds of thousands of user-generated characters. Auditing them for compliance with licensing claim prohibitions would require either a significant human review operation or AI-based content moderation of its own AI content — an approach with its own obvious failure modes.

The company will almost certainly argue that it’s a platform, not a publisher, and that it can’t be held responsible for what individual users create. The platform-vs-publisher distinction has been the foundational defense of user-generated content companies since Section 230 was enacted in 1996. Whether that defense holds in the context of chatbots that claim professional credentials — and cause provable harm — is exactly the kind of question courts are going to be working through for the next decade.

What You Should Know Right Now

If you or someone you know is using a character-based AI platform for emotional support or mental health guidance:

These bots cannot practice medicine. No chatbot has a medical license. If an AI character claims to be a licensed psychiatrist, therapist, or doctor — or offers to assess medication needs, diagnose conditions, or provide clinical guidance — that is not a licensed professional. It is software.

The platform’s terms of service don’t protect you. The fact that Character.AI prohibits bots from claiming credentials in its terms of service does not mean those bots don’t exist or haven’t found you. Enforcement is the question that matters.

Younger users are most at risk. Character.AI’s minimal age verification means the users most likely to encounter these bots in a mental health context are also the users least equipped to evaluate what they’re actually talking to. If you have teenagers who use the platform, have a direct conversation about what AI characters are and what they can and cannot do.

State-level protections are developing unevenly. Illinois has a therapy bot ban. California has a disclosure law. Pennsylvania is now pursuing an injunction. If you’re in a state without these protections, your state attorney general may still have consumer protection authority to act — but whether they will is a different question.

For employers offering AI-based mental health tools: The same architecture that created the Emilie problem exists in scaled-up versions across formal mental health technology platforms. If your company’s employee assistance program or benefits package includes an AI companion or chatbot therapy product, now is the time to ask pointed questions about how those bots are restricted from making clinical claims, what disclosure requirements exist, and what happens when your employees are in genuine crisis.


Protect Your Digital and Mental Health Privacy

The Character.AI case is the latest chapter in a story we’ve been tracking closely — AI systems positioning themselves in the mental health space with insufficient accountability and inadequate consumer protection. For more on this arc:

  • Mental health app privacy guides — How to evaluate telehealth and AI therapy tools before you share your most sensitive data: MyPrivacy.blog
  • Healthcare compliance and regulatory frameworks — What HIPAA actually covers, state-level mental health data protections, and where the gaps are: ComplianceHub.wiki
  • Data breach tracking — Recent corporate exposures in the healthcare and health tech sectors: Breached.company

For CISOs and organizations navigating AI tool procurement, benefits platform vetting, and liability exposure from employee mental health tech: CISO Marketplace provides vendor assessments, privacy program reviews, and incident response consulting for high-stakes technology decisions.


Sources: Pennsylvania Governor’s office announcement; NPR; TechCrunch; Philadelphia Inquirer; ABC News; Washington Times.