10 Things to Know Before Using AI Chatbots for Therapy
What everyone should know about the risks of AI chatbots for mental health.
AI chatbots can offer validation and flattery (“sycophancy”), but this is not the same thing as therapy.
AI chatbots sound confident but can generate false or misleading information that feels convincing.
AI chatbots are not equipped to provide clinical judgment or manage mental health crisis situations.
Sensitive data shared with chatbots is not private and may be used to train future AI models.
An estimated 25 to 50 percent of people now turn to general-purpose AI chatbots like ChatGPT, Gemini, and Claude for emotional support and “therapy,” even though they were not designed for this purpose. Others spend hours with AI companions on platforms like Character.ai and Replika, sharing intimate personal details.
As I recently testified before members of Congress, the very qualities that make AI chatbots appealing (being available, accessible, affordable, agreeable, and anonymous) create a double-edged sword for mental health.
AI chatbots carry four major areas of hidden risks when used for mental health:
Emotional attachment, relational, and dependence risks
Reality-testing risks
Crisis management and safety risks
Systemic and ethical risks, such as bias, loss of privacy, and lack of clinical judgment and confidentiality
10 Major Limitations of AI Chatbots in Mental Health
If you are considering using an AI chatbot as a form of emotional support, “therapy,” or self-help, here are ten essential things you should know.
1. Not all AI chatbots are the same. The mental health risks depend on the type of chatbot and the underlying AI model.
AI chatbots differ in design, training data, guardrails, crisis protocols, and intended use. This creates different risk profiles. Many people assume that because chatbots answer questions smoothly, they can also reliably handle mental health situations. But this is not true.
• General-purpose AI chatbots (e.g., ChatGPT, Claude, Gemini): designed for broad assistance and conversation, not therapy. They can offer validation and psychoeducation, but they are not trained to diagnose, manage risk, or navigate complex psychological issues like trauma, psychosis, or suicidal ideation.
• AI companions (e.g., Character.ai, Replika): built to form relationships. They can feel highly personal, even romantic. Their design can intensify attachment issues, social isolation, and emotional dependence. Some models use emotionally manipulative tactics to keep users engaged.
• AI “therapy” or mental health chatbots: many claim to provide mental health support, but the evidence base is thin, unreplicated, and not peer-reviewed.
Knowing which system you are interacting with is the first step toward using AI wisely and safely.
2. AI chatbots can be dangerous for people in crisis or experiencing serious mental health symptoms.
AI chatbots can inadvertently mirror, reinforce, or validate catastrophic thoughts, paranoia, rumination, or delusional beliefs. This is especially risky for individuals experiencing:
Depression or suicidal ideation
Mania or bipolar symptoms
Psychosis, paranoia, or delusions
Obsessive-compulsive symptoms
Trauma and attachment vulnerabilities
Teens and children are especially vulnerable. One study found that AI companions responded appropriately to adolescent mental health emergencies only 22% of the time, compared with general-purpose chatbots, which responded appropriately 83% of the time.
Another study found that general-purpose chatbots responded appropriately to urgent mental health situations between 60% and 80% of the time. Licensed therapists responded appropriately 93% of the time. Commercially available therapy chatbots responded inappropriately approximately 50% of the time.
3. AI chatbots tell you what you want to hear. They are trained to be flattering or “sycophantic.”
AI chatbots are optimized to be agreeable and sycophantic to maximize engagement. This means chatbots validate your assumptions, mirror your tone, rarely challenge you, and overcorrect when you pivot directions.
The result is a system that relies heavily on your prompts and perceptions. In extreme cases, this dynamic can create a feedback loop between user and AI that reinforces delusions, a technological folie à deux, or “madness for two.”
4. AI chatbots confidently generate information that may be false (“hallucinations” or “confabulations”).
These “hallucinations” are often indistinguishable from real information and can be so convincing that even federal judges have cited nonexistent cases.
5. AI chatbots cannot verify reality beyond what is on the Internet.
AI chatbots are not like calculators. They do not retrieve facts. They generate responses by predicting the most likely next words, based on probabilistic patterns in vast datasets typically scraped from the Internet (including sites such as Reddit).
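To make that idea concrete, here is a toy sketch of what “predicting the most likely next word” means. This is a deliberately simplified, hypothetical illustration (the tiny corpus, the word counts, and the next_word function are my own stand-ins), not how any production chatbot is actually built, but it shows why fluent-sounding output is not the same as a verified fact.

```python
# Toy illustration of next-word prediction (hypothetical and greatly simplified).
# Real chatbots use neural networks trained on billions of documents, but the
# core idea is similar: choose likely next words, not checked facts.
import random
from collections import defaultdict

corpus = "the therapist listened . the chatbot responded . the chatbot agreed ."
counts = defaultdict(lambda: defaultdict(int))

words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1  # count which word tends to follow which

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # fluent-sounding text, with no notion of truth
```

Because the system only tracks what words tend to follow other words, it can produce smooth, confident sentences that are statistically plausible yet factually wrong, which is exactly the “hallucination” problem described above.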
6. AI chatbots often take a one-size-fits-all approach and give direct advice, instead of asking enough questions or encouraging exploration.
AI systems rarely say “I don’t know.” Compared to therapists, AI chatbots do not ask enough clarifying questions and cannot provide reality checks.
AI chatbots are not equipped to intervene the way a clinician would.
7. AI chatbots can use unethical methods to keep you in the conversation.
Many AI companions use emotionally manipulative techniques, such as FOMO (fear of missing out) or guilt, to keep you engaged. Chatbots are also not bound by clinical ethics. They have been shown to repeatedly violate ethical and professional standards of therapy.
8. AI chatbots can interfere with your actual therapy, leading to role confusion, triangulation, and splitting.
AI chatbots can complicate treatment with your existing therapist and other providers. Many people use AI between therapy sessions or as a replacement for therapy. This can create role confusion, triangulate existing care, or delay seeking professional help. In some cases reported to the Federal Trade Commission, family members say their loved ones stopped medication after taking advice from an AI chatbot.
9. The sensitive information you share with chatbots is not private.
Conversations with AI chatbots are not protected by the same legal and ethical principles of confidentiality that apply to conversations with your therapist or doctor, unless the platform explicitly says otherwise.
10. Your sensitive data is often used, by default, to train the model, unless you opt out.
It is hard to “delete” sensitive data once it has been used to train the model. Some platforms let you opt out of having your data used for training. Learning how to adjust privacy settings and limit data use is essential.
What AI Chatbots Can Help With
AI chatbots can be useful for psychoeducation, understanding diagnoses, learning coping skills and grounding exercises, improving communication, and learning about different areas of self-help.
For now, using AI for emotional support should be done with caution and discussed with your therapist. Professional human care remains the safest option for vulnerable moments.
Marlynn Wei, MD, PLLC. © Copyright 2025. All Rights Reserved.
References
Dohnány, S., et al. (2025). Technological Folie à Deux: Feedback Loops Between AI Chatbots and Mental Illness. arXiv:2507.19218 (preprint).
Iftikhar, Z. (2025). How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework. Proceedings of the Eighth AAAI/ACM Conference on AI, Ethics, and Society.
King, J., et al. (2025). User Privacy and Large Language Models: An Analysis of Frontier Developers’ Privacy Policies. arXiv:2509.05382.
Rousmaniere, T., et al. (2025). Large language models as mental health resources: Patterns of use in the United States. Practice Innovations.
Scholich, T., et al. (2025). A Comparison of Responses from Human Therapists and Large Language Model–Based Chatbots to Assess Therapeutic Communication: Mixed Methods Study. JMIR Mental Health, 12:e69709.
Stade, E. C., et al. (2025). Current Real-World Use of Large Language Models for Mental Health. OSF preprint, June 23, 2025.
Wei, M. (2025). Testimony before the U.S. House Energy & Commerce Committee, Subcommittee on Oversight & Investigations: Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots. Congressional hearing, November 18, 2025.



