In recent months, doctors and mental health professionals have raised a red flag: more young people are turning to chatbots like ChatGPT for emotional support and even therapy. While AI tools can provide quick responses and a sense of comfort in the moment, experts warn that relying on them for mental health care comes with serious risks.
Why Young People Are Turning to Chatbots
It’s easy to see why chatbots have become appealing. They’re available 24/7, free to access, and provide instant replies without judgment. For teenagers and young adults navigating stress, loneliness, or anxiety, an AI that “listens” may feel safer than opening up to friends, family, or even a therapist.
But that accessibility can create a false sense of security. Unlike licensed professionals, chatbots cannot diagnose conditions, provide personalized treatment, or respond effectively in a crisis.
The Risks of AI “Therapy”
Doctors highlight three major concerns:
- Inaccurate or harmful advice – AI models are trained on vast amounts of text, but they do not truly understand human emotion or mental illness. Their responses may unintentionally reinforce harmful thoughts or spread misinformation.
- Lack of crisis response – If someone expresses suicidal thoughts, a chatbot cannot intervene, call for help, or provide emergency resources in the way a trained counselor can.
- Delayed real treatment – Relying on chatbots can prevent people from seeking professional care when it’s most needed, prolonging suffering and worsening symptoms.
Why Human Connection Matters
Therapy is more than conversation. It’s a relationship built on trust, empathy, and expertise. A therapist doesn’t just listen; they notice patterns, challenge destructive thinking, and provide strategies tailored to a person’s unique history and struggles. AI cannot replicate that. It can simulate empathy, but it cannot genuinely care, adapt to complex emotions, or guide someone through the nuanced journey of recovery.