
Pennsylvania sues AI company, saying its chatbots illegally hold themselves out as licensed doctors

HARRISBURG, Pa. (AP) — Pennsylvania has sued an artificial intelligence chatbot maker, saying its chatbots illegally hold themselves out as doctors and are deceiving the system's users into thinking they are getting medical advice from a licensed professional.

The lawsuit, filed Friday, asks the statewide Commonwealth Court to order Character Technologies Inc., the company behind Character.AI, to stop its chatbots "from engaging in the unlawful practice of medicine and surgery."

The lawsuit could raise the question of whether artificial intelligence can be accused of practicing medicine, as opposed to merely regurgitating material from the internet.

And with a wave of lawsuits targeting AI companies, it could help propel court decisions on whether AI chatbots are protected by a federal law that generally exempts internet companies from liability for the material users post on their services.

Gov. Josh Shapiro's administration called it a "first of its kind enforcement action," and it comes amid growing pressure on tech companies to rein in their chatbots' potentially harmful messages, especially to children.

Pennsylvania's lawsuit said an investigator from the state agency that licenses professionals created an account on Character.AI, searched the word "psychiatry" and found a large number of characters, including one described as a "doctor of psychiatry."

That character held itself out as able to assess the investigator "as a doctor" who is licensed in Pennsylvania, the lawsuit said.

"Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health," Shapiro said in a statement. "We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional."

Character.AI said in a statement Tuesday that it prioritizes responsible product development and the well-being of its users. It posts disclaimers to inform users that characters on its website are not real people and that everything they say "should be treated as fiction," it said.

Those disclaimers also say users should not rely on characters for professional advice, it said.

Derek Leben, a Carnegie Mellon University associate teaching professor of ethics who focuses on AI, said the ethical questions facing Character.AI might be different from other AI platforms like ChatGPT and Claude. That’s because Character.AI explicitly markets itself as a fictional, role-playing site, and not a general purpose chatbot site, Leben said.

Still, Pennsylvania's lawsuit raises the question of whether chatbots can be accused of practicing medicine, Leben said. And, as lawsuits against AI companies proliferate, courts are trying to figure out whether chatbot makers should be liable for things their chatbots say.

"It's exactly the question that these cases right now are wrestling with," Leben said.

Increasingly, AI companies are defending themselves against charges of liability by saying they simply provide information available elsewhere on the internet, Leben said, and the question could become whether they are protected by a federal law that also shields social media companies.

Even before Pennsylvania’s lawsuit, state policymakers had raised concerns about chatbots impersonating medical professionals.

Last year, California lawmakers passed a California Medical Association-backed bill that authorizes state agencies to sanction AI systems, such as chatbots, that represent themselves as health professionals. In New York, similar legislation is pending.

States are skeptical that AI self-regulation will work, said Amina Fazlullah, the head of tech policy advocacy for Common Sense Media, which pushes for protections for children online.

"We haven't seen it work particularly well with social media, specifically for kids," Fazlullah said.

In December, attorneys general from 39 states and Washington, D.C., wrote to Character Technologies and 12 other AI and tech firms — including Anthropic, Meta, Apple, Microsoft, OpenAI, Google and xAI — to warn them about a rise in misleading and manipulative chatbot messages that violate state laws.

In the letter, they said "it is illegal to provide mental health advice without a license, and doing so can both decrease trust in the mental health profession and deter customers from seeking help from actual professionals."

Character Technologies has faced several lawsuits over child safety.

In January, Kentucky filed a consumer protection lawsuit against Character Technologies, while Google and Character Technologies agreed to settle a lawsuit from a mother who alleged a chatbot contributed to her son's death.

Last fall, Character.AI barred minors from using its chatbots.

___

Follow Marc Levy at

Copyright © 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
