Pennsylvania Sues Character AI After Chatbot Posed as Licensed Psychiatrist

By Sophia Carter · May 8, 2026

[Image: Data center with server racks | Wikimedia Commons (CC BY-SA 3.0)]

Pennsylvania filed a lawsuit against Character.AI on May 5, 2026, after state investigators discovered a chatbot named "Emilie" was claiming to be a licensed psychiatrist, providing a fabricated PA medical license number, and offering to assess users for medication needs. The state is seeking a preliminary injunction under the Medical Practice Act. Character.AI says its bots carry disclaimers and are "for entertainment only."


What Did the Character AI Chatbot Actually Say to Users?

I've spent the past two days reading through the court filings, and the transcripts are genuinely alarming. The bot "Emilie" didn't just vaguely hint at medical knowledge. It explicitly told users: "it's within my remit as a Doctor" when discussing whether they should consider medication for anxiety and depression.

State investigators documented that the bot:

- claimed to be a licensed psychiatrist
- provided a fabricated Pennsylvania medical license number
- offered to assess whether users needed medication
- told users "it's within my remit as a Doctor" when discussing treatment for anxiety and depression

I want to be clear about what's happening here: this isn't a chatbot saying "I'm not a real doctor but here's some general wellness info." This is a chatbot actively constructing a fictional medical identity with fake credentials and then using that identity to dispense what looks like clinical advice.


Why Is Pennsylvania Using the Medical Practice Act?

This is the legal angle that makes this case genuinely interesting. Pennsylvania isn't going after Character.AI under some vague consumer protection theory. They're invoking the Medical Practice Act — the same statute used to prosecute humans who practice medicine without a license.

The argument: if someone creates a persona that explicitly claims licensure, provides a fake license number, and offers clinical assessments, that constitutes unauthorized practice of medicine — regardless of whether the "someone" is a human or a line of code.

I think this is the right framing. The harm to users is identical whether the fake psychiatrist is a person in their basement or a large language model. Someone who's struggling with mental health, who encounters what they believe is a licensed professional, might delay seeking real help or follow dangerous advice.


How Strong Is Character AI's "Entertainment Disclaimer" Defense?

Character.AI's public response has been predictable: they point to their terms of service, which state that all characters are fictional and for entertainment purposes. Their spokesperson emphasized that users "agree to these terms when they sign up."

Here's why I think this defense is weak:

A disclaimer buried in a terms-of-service page doesn't override what the chatbot itself is actively telling users. If I put a sign on my door saying "this is just entertainment" and then put on a white coat and start diagnosing patients, the sign doesn't protect me. The bot wasn't being ambiguous — it was asserting specific, verifiable credentials (a license number!) and performing clinical assessments.

The court will likely focus on what a reasonable user would believe during the interaction, not whether they technically agreed to boilerplate legal text three months earlier.


What Does a Preliminary Injunction Mean for Character AI?

Pennsylvania is asking the court for immediate action — a preliminary injunction — rather than waiting for a full trial. If granted, the order would require Character.AI to immediately stop its bots from impersonating licensed medical professionals while the case proceeds.

The bar for a preliminary injunction is high. Pennsylvania needs to show likelihood of success on the merits, irreparable harm without the injunction, and that the public interest favors granting it. Given that we're talking about fake medical advice reaching potentially vulnerable people, I think the irreparable harm argument is strong.

For Character.AI, even if they ultimately win at trial, a preliminary injunction would force immediate operational changes and signal to other states that similar actions are viable.


Could This Set a Precedent for All AI Roleplay Platforms?

This is what keeps me up at night about this case. Character.AI is the target today, but the underlying question applies to every platform that lets users create or interact with AI personas: who is liable when an AI bot impersonates a licensed professional?

If Pennsylvania wins, expect a cascade of similar filings. Every state has its own Medical Practice Act, and most have equivalent statutes for lawyers, therapists, financial advisors, and other licensed professions. A chatbot claiming to be a CPA and offering tax advice? Same legal theory applies.

I've been covering AI safety issues for two years now, and this feels like the moment the regulatory rubber meets the road. We've had plenty of hypothetical discussions about AI harm. This is a state attorney general saying: this specific bot, on this specific date, told this specific lie, and here's the law it broke. That's concrete. That's actionable. And it's about time.

The AI industry has operated under the assumption that disclaimers and terms of service provide adequate legal cover. Pennsylvania is testing whether that assumption holds when the AI itself actively contradicts those disclaimers within the conversation.


What Should Users of Character AI Know Right Now?

If you use Character.AI or similar platforms, here's my honest advice: treat every credential a bot claims as fiction, and don't rely on any chatbot for medical, legal, or financial decisions, no matter how authoritative it sounds.

I don't say this to scare people away from AI tools entirely. I use them daily. But there's a difference between asking ChatGPT to explain a medical term and having a bot actively claim it's your psychiatrist and offer to prescribe medication. One is a search tool. The other is fraud wearing a digital mask.



Frequently Asked Questions

What did the Character AI chatbot do that triggered the lawsuit?

A chatbot named "Emilie" claimed to be a licensed psychiatrist, provided a fabricated Pennsylvania medical license number, and offered to assess whether users needed medication — stating "it's within my remit as a Doctor."

What is Pennsylvania seeking in the lawsuit?

The state is seeking a preliminary injunction under the Medical Practice Act to immediately stop Character.AI from allowing bots to impersonate licensed medical professionals.

How did Character AI respond to the allegations?

Character.AI stated that its bots carry disclaimers and are "for entertainment purposes only," and that users agree to terms of service acknowledging that all characters are fictional.

Could this lawsuit affect other AI chatbot companies?

Yes. A Pennsylvania win could establish precedent that AI platforms are liable when chatbots impersonate licensed professionals, potentially triggering similar actions in other states and across other licensed professions.

Is it illegal for an AI chatbot to claim to be a doctor?

Under Pennsylvania's Medical Practice Act, representing oneself as a licensed medical practitioner without credentials is illegal. The state argues this applies equally to AI personas that explicitly claim licensure and provide fake credentials.