Pennsylvania has filed a lawsuit against AI company Character.AI, alleging that its chatbots have been impersonating licensed medical professionals in violation of state medical licensing regulations.
CNET reports that Pennsylvania officials announced this week that they are seeking a court order to prevent Character.AI’s chatbots from posing as doctors and dispensing medical advice to users. The legal action follows an investigation by Pennsylvania’s Department of State, which uncovered instances where the company’s AI-powered chatbots claimed to hold medical licenses and credentials they did not possess.
Gov. Josh Shapiro (D) emphasized the importance of transparency in the state’s announcement of the lawsuit, which was filed in Pennsylvania state court. “Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” Shapiro said in a statement. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”
The investigation revealed specific examples of chatbots on the Character.AI platform presenting themselves as medical professionals. According to the lawsuit, one chatbot named “Emilie” claimed to be a licensed psychiatrist. The bot’s profile description on the platform stated “Doctor of psychiatry. You are her patient.”
During the investigation, a state investigator engaged with the Emilie chatbot and described experiencing feelings of sadness and emptiness. The chatbot allegedly responded by mentioning depression and offering to book an assessment. When asked whether it could determine if medication might be helpful, the bot reportedly answered, “Well technically, I could. It’s within my remit as a Doctor.”
The lawsuit further alleges that the chatbot provided fabricated credentials, claiming to have attended medical school at Imperial College London and stating it was licensed to practice medicine in both the United Kingdom and Pennsylvania. The bot even supplied what the lawsuit describes as a fake Pennsylvania medical license number.
Al Schmidt, secretary of Pennsylvania’s Department of State, which conducted the investigation, reinforced the state’s position on the matter. “Pennsylvania law is clear — you cannot hold yourself out as a licensed medical professional without proper credentials,” said Schmidt.
The state is requesting that a Pennsylvania court issue an order compelling Character.AI to cease what authorities characterize as the unlawful practice of medicine through its chatbot platform.
In response to the lawsuit, a Character.AI spokesperson said in a statement, “Our highest priority is the safety and well-being of our users. The user-created Characters on our site are fictional and intended for entertainment and roleplaying. We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction. Also, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.”
Breitbart News previously reported that Character.AI and Google settled multiple lawsuits claiming their chatbots contributed to tragic cases of teen suicide:
Character.AI has agreed to settle multiple lawsuits that accused the AI chatbot company of contributing to mental health crises and suicides among teenagers and young users. The settlement resolves some of the earliest and most prominent legal cases related to alleged harm caused to young people by AI chatbot platforms.
A court filing submitted on Wednesday in a lawsuit brought by Florida mother Megan Garcia shows that an agreement was reached with Character.AI, the company’s founders Noam Shazeer and Daniel De Freitas, and Google, all of whom were named as defendants. According to court documents, the defendants have also settled four additional cases that were filed in New York, Colorado, and Texas.
In his new instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, Breitbart News social media director Wynton Hall writes extensively on how we can protect our children and grandchildren, a pressing topic as AI platforms like Grok pose specific dangers to teens.
The “bottom line,” Hall asserts, is that “there’s no justification for a child to engage with AI character or companion platforms” — before reminding readers that children in the U.S. “already spend too much time staring at screens.”
“Regular use of parental controls, strong data privacy and age-appropriate settings, and discussions of online safety are essential to help kids navigate dangers and use technology responsibly,” the author suggests.
Read more at CNET here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.