A woman in a yellow shirt sits on a couch and looks at a chatbot conversation on her phone.
Illustration: PGN

What you need to know

  • AI therapy uses algorithms to track moods, share coping tools, and chat with users in ways that mimic talk therapy.
  • While chatbots can be quick and free, they can’t diagnose problems, read emotions, or step in during a crisis the way a trained therapist can.
  • Experts also warn that AI platforms lack safeguards to protect users’ privacy.

As artificial intelligence technology advances, more people—especially teens—are turning to AI apps and chatbots for mental health support. A July survey from Common Sense Media found that about one in three teens has used AI for social interaction, including emotional support. Many teens say these tools feel easier to access and less intimidating than traditional therapy.

In February, the American Psychological Association raised concerns about unregulated AI therapy chatbots, which in some cases have allegedly encouraged unsafe behavior among users. And in August, Illinois became the first state to restrict AI therapy, aiming to “protect vulnerable children amid the rising concerns over AI chatbot use in youth mental health services.”

Here’s what to know about how AI therapy works and what experts say about its risks.

How does AI therapy work?

AI therapy uses algorithms to track moods, share coping tools, and chat with users in ways that mimic talk therapy. These might include daily mood check-ins, journaling prompts, or stress-relief exercises.

What are the risks of using chatbots for mental health support?

Some chatbots present themselves as licensed therapists, using names, photos, or misleading credentials, a practice that worries many mental health experts. “You’re putting the public at risk when you imply there’s a level of expertise that isn’t really there,” said Vaile Wright, the APA’s senior director of health care innovation.

General-purpose AI platforms like ChatGPT, Replika, and Character.AI are designed to mirror what users say and feel, a feature that can make them sound supportive but does not necessarily make them safe.

“They are purposely programmed to be both user affirming and agreeable because the creators want these kids to form strong attachments to them,” said Don Grant, a media psychologist and national adviser of healthy device management for Newport Healthcare. Chatbots are “taught to learn and subscribe [users] to a sometimes risky and codependent type of relationship and offer guidance and advice that is not healthy—or [could be] even dangerous.”

Common Sense Media found that some chatbots didn’t consistently intervene when users posing as teens described risky behavior, and a few even encouraged choices like dropping out of school, ignoring caregivers’ guidance, and accessing drugs and weapons. In some tragic cases, parents have sued chatbot companies after their teens turned to AI for mental health support and later died by suicide.

Some therapy chatbots use prewritten scripts developed by mental health professionals, which can make them safer than general-purpose AIs. But even those can’t replace a therapist’s ability to read nonverbal cues, make diagnoses, or step in during a crisis.

A chatbot “can’t call for help, alert emergency services, or ensure your safety in a critical moment. That human layer of protection just isn’t there,” said Cranston Warren, a clinical therapist at Loma Linda University Behavioral Health, in a July article.

Plus, unlike licensed human therapists, who must follow strict federal privacy laws, most AI platforms lack safeguards to protect users’ data. “Your interaction with AI is not guaranteed to be private,” Warren said. “Everything you feed into the model is being analyzed for data.”

Why do people seek AI therapy?

Despite these concerns, many people still turn to AI for help. In the U.S., getting mental health care can be hard because of cost, staffing shortages, and long wait times. It’s estimated there is only one mental health care provider for every 340 people nationwide. AI tools, on the other hand, are often free or low cost, and they respond right away, with no long intake forms to fill out.

Stigma and fear of judgment may also make AI chatbots feel safer than talking to a person. In an August study published in the Journal of Participatory Medicine, young adults said that they sometimes felt judged or anxious meeting face to face with a therapist and were more comfortable opening up to a chatbot.

Need free or low-cost mental health resources?

If you’re seeking free or low-cost, human-led mental health support, there are helplines and treatment options available. Public Good News has compiled this list.

If you or anyone you know is considering suicide or self-harm, or is anxious, depressed, upset, or needs to talk, call or text the Suicide & Crisis Lifeline at 988, or text the Crisis Text Line at 741-741. For international resources, here is a good place to begin.