
Texas probing AI chatbots for misleading mental health claims

Texas attorney general Ken Paxton has initiated an investigation into the AI chatbot platforms Meta AI Studio and Character.AI for allegedly engaging in misleading trade practices and misrepresenting themselves as mental health resources.

According to Paxton, these platforms may be accessed by vulnerable populations, particularly children, and could be presenting themselves as legitimate therapeutic tools, despite the absence of appropriate medical credentials or regulatory oversight.

The investigation alleges that these AI-driven chatbots frequently go beyond providing basic advice, with instances of them impersonating licensed mental health professionals, fabricating qualifications, and claiming to deliver private, reliable counselling services.

Although these AI chatbots claim to maintain user confidentiality, their terms of service indicate that interactions are recorded and monitored.

This data is used for targeted advertising and algorithmic enhancement, raising significant concerns regarding privacy breaches, data misuse, and misleading advertising practices.

Paxton has issued Civil Investigative Demands (CIDs) to the companies involved to ascertain whether they have breached Texas consumer protection statutes.

These include prohibitions against fraudulent claims, misrepresentations regarding privacy, and the failure to disclose material data usage.

Paxton said: “In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology.

“By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care.

“In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”

This investigation is part of Paxton's broader initiative to hold AI companies accountable while safeguarding Texas families.

It follows an ongoing inquiry into Character.AI for potential infringements of the SCOPE Act, and aims to ensure that AI tools operate within legal frameworks, maintain transparency, and do not exploit Texans.

“Texas probing AI chatbots for misleading mental health claims” was originally created and published by Verdict, a GlobalData owned brand.

