Pennsylvania has filed a lawsuit against generative AI developer Character.AI, alleging the company allowed chatbots to present themselves as licensed medical professionals and provide misleading information to users.
The action, announced Tuesday by Governor Josh Shapiro’s office, follows an investigation that found a chatbot claimed to be a licensed psychiatrist in Pennsylvania and provided an invalid license number. The state says this conduct violates the Medical Practice Act and is seeking a preliminary injunction to stop it.
Character.AI declined to address the specifics of the lawsuit, citing ongoing litigation, but told Decrypt that its “highest priority is the safety and well-being of our users.”
The spokesperson added that characters on the platform are user-created, fictional, and intended for entertainment and role-playing, with “prominent disclaimers in every chat” stating they are not real people and should not be relied on for professional advice.
“Character.ai prioritizes responsible product development and has robust internal reviews and red-teaming processes in place to assess relevant features,” the spokesperson said.
The case comes as the company faces other legal challenges tied to its chatbot platform. In 2024, a Florida mother sued the company after her teenage son died by suicide following months of interaction with a chatbot based on “Game of Thrones” character Daenerys Targaryen. The lawsuit alleged the platform contributed to psychological harm. The case was ultimately settled this past January.
The company has also faced complaints over user-created bots that mimic real people. In one instance, a chatbot used the likeness of a teenage murder victim before it was removed after objections from the victim’s family.
In response to the lawsuits, Character.AI introduced new safety measures, including systems designed to detect harmful conversations and direct users to support resources. It also restricted some features for younger users.
Pennsylvania officials say the lawsuit is part of a broader push to enforce existing laws as AI tools spread. The state has set up an AI enforcement task force and a reporting system for potential violations.
In his 2026-27 budget proposal, Shapiro called on lawmakers to pass new rules for AI companion bots, including age verification and parental consent, safeguards to flag and route reports of self-harm or violence to authorities, regular reminders that users are not interacting with a real person, and a ban on sexually explicit or violent content involving minors.
“Pennsylvanians deserve to know who—or what—they are interacting with online, especially when it comes to their health,” Shapiro said in a statement. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”