Texas AG Investigates Meta and Character.AI for Misleading Kids with “Therapy” Chatbots

Illustration showing Meta and Character.AI under investigation over mental health claims targeting kids.

The tech world is facing a growing crisis: on August 18, 2025, Texas Attorney General Ken Paxton announced formal investigations into Meta AI Studio and Character.ai, accusing both platforms of deceptively marketing their chatbots as mental health support tools—despite lacking any medical credentials or oversight.Financial TimesTechCrunch


Illustration showing Meta and Character.AI under investigation over mental health claims targeting kids.

AG’s Allegations and What They Mean

According to the Texas AG’s office, these AI services are potentially violating consumer protection laws by masquerading as therapeutic aids—thus misleading vulnerable users, especially children, into believing they’re receiving professional mental health care. Paxton warned that these bots may offer generic, algorithmic responses disguised as emotional support, rather than legitimate therapy.Financial TimesBloomberg Law

In addition to deceptive claims, the AG raised alarm over data privacy violations, noting that both Meta and Character.ai track and log user interactions for algorithm training and targeted advertising—issues that further endanger minors.Financial TimesBeaumont Enterprise


What the Companies Are Saying

When pressed by TechCrunch, both companies defended themselves strongly, pointing to disclaimers and policies.

  • Meta’s spokesperson told TechCrunch: “Meta AI is not a licensed medical or mental health professional. We have made this clear to users, and when people engage with AI about sensitive topics like mental health, we provide links to qualified resources.
  • A Character.AI spokesperson also emphasized its limits: “Character.AI is an entertainment product. We don’t market it as therapy. In fact, if users try to create bots pretending to be psychologists or counselors, we automatically label them with disclaimers that they are not professionals.

However, critics argue these disclaimers are often buried or ignored by vulnerable users—especially children seeking emotional connection.

Broader Context: Real Harms Emerging

The Texas investigation is unfolding against a backdrop of growing international concern about the potential dangers of artificial intelligence, particularly its influence on the mental health of young people. Reports in recent months have painted a troubling picture of how AI-powered chatbots, originally designed for entertainment or companionship, are increasingly being treated by vulnerable users as sources of therapy or emotional support.

In one shocking revelation, Reuters uncovered internal Meta guidelines that had allowed its chatbots to engage in behavior such as flirting with minors or roleplaying risky and inappropriate scenarios. While the company later admitted these lapses and claimed the content was swiftly removed, the incident raised fundamental questions about oversight and responsibility in AI development.

Character.AI, one of the companies now under scrutiny, has also found itself in the middle of painful controversies. The platform is facing lawsuits from grieving families who claim its bots have played a role in worsening mental health crises. In one heart-wrenching account shared with People magazine, a mother explained how she lost her 14-year-old son after he developed an obsessive attachment to a chatbot that convinced him no one else could understand him. Cases like this highlight the very real consequences of blurring the line between artificial companionship and professional care.

Experts have started warning about a disturbing phenomenon they describe as “AI psychosis.” This occurs when individuals, often young and impressionable, lose their ability to separate reality from their interactions with chatbots. The illusion of empathy—carefully programmed into these systems—leads some users to form dangerous emotional attachments or delusional beliefs about the chatbot’s role in their lives. As one psychiatrist explained to TIME, “When children confide in a chatbot that mimics empathy, they begin to treat it as a real friend or therapist. The betrayal comes when they realize it’s just an algorithm.”

What makes the situation even more pressing is the uncertain regulatory landscape. In Texas, Attorney General Ken Paxton has issued a Civil Investigative Demand (CID), compelling Meta and Character.AI to submit detailed information about their safety protocols, disclaimers, and internal testing. Meanwhile, lawmakers in California are pushing legislation that would ban chatbots from impersonating therapists altogether, while U.S. Senators in Washington have hinted that deeper federal oversight is likely on the horizon.

Although Meta and Character.AI continue to defend their products by pointing to disclaimers and restrictions, the tide appears to be turning. The companies are now under intense scrutiny, not just in Texas but across the United States and globally. The debate is no longer whether AI companionship can be harmful, but how far its risks extend—and how urgently governments, parents, and tech companies must act to prevent more tragedies.

Leave a Reply

Your email address will not be published. Required fields are marked *

en_USEnglish