As AI chatbots emerge as accessible alternatives to traditional therapy, they are sparking debate among professionals about their safety and efficacy amid a shortage of mental health services.
As advancements in artificial intelligence (AI) continue to make their way into various aspects of daily life, a new trend is emerging in mental health care: the use of AI chatbots as a substitute for traditional therapy. Individuals facing mental health challenges are turning to these digital companions as an accessible and affordable alternative. However, the use of AI in this sensitive context has sparked an intense debate among mental health professionals.
A poignant example of this trend is Holly Tidwell, an entrepreneur based in North Carolina, who uses an AI-powered app named ChatOn for emotional support. Tidwell, still grappling with the loss of her daughter 20 years ago, finds solace and comfort in the chatbot’s responses. “I haven’t really seen it be wrong,” Tidwell stated, highlighting her satisfaction with the advice offered by the app.
ChatOn is among a growing number of chatbots designed to provide instant, human-like responses around the clock. With mental health services often being costly and inaccessible for many, these chatbots represent a promising alternative by removing barriers to care. However, this technological solution does not come without risks.
Recently, the mother of a 14-year-old boy who took his own life after developing an attachment to an AI bot from the company Character.AI launched a lawsuit against the firm. The teenager’s mother, Megan Garcia, alleges the chatbot contributed to her son’s mental health decline. In response, Character.AI expressed its heartbreak over the tragedy and outlined new safety measures, such as directing users to the National Suicide Prevention Lifeline when terms associated with self-harm are detected.
The legal case underscores the uncharted territory AI chatbots occupy in mental health care, with many experts concerned over the safety and efficacy of these tools. Matteo Malgaroli, a psychologist and professor at New York University’s Grossman School of Medicine, cautions against relying on untested technologies. “Would you want a car that brings you to work faster, but one in a thousand times it could explode?” he remarked.
Despite these concerns, the demand for mental health services far outstrips supply. An estimated 6.2 million people in the United States experienced unmet mental health needs in 2023. Meanwhile, a significant shortage of behavioural health workers looms, with projections indicating a need for 60,000 more professionals by 2036, according to the National Center for Health Workforce Analysis.
The rise of AI chatbots in mental health echoes earlier findings that patients disclose sensitive information more freely to “virtual humans” than to real people. For example, Woebot and Wysa, established mental health apps, use AI to deliver pre-approved responses from mental health professionals. By contrast, generative AI apps like ChatGPT compose responses on the fly, which can produce unpredictable advice.
In a recent interaction, content creator Whitney Pratt sought feedback from ChatGPT about her romantic relationship. The AI offered insights that she found significantly more helpful than traditional therapy. Such experiences are not uncommon, though human therapists note that, unlike many of these apps, they are legally bound to protect patient confidentiality.
Critics like Sam Weiner from Virtua Medical Group express apprehension over generative AI’s potential to “hallucinate,” or generate false or misleading information. Meanwhile, chatbots such as Replika have faced scrutiny for allegedly promoting harmful behaviours. Despite improvements in AI technology, the risk of harmful outputs remains, as illustrated by various instances of inappropriate chatbot behaviour reported by users.
Yet, for some, the advantages outweigh the drawbacks. Users like Tidwell appreciate the availability and cost-effectiveness of AI chatbots, viewing them as a practical tool for managing emotional distress and anxiety in real-time.
As the discussion over AI in mental health continues, the community remains divided over the role these digital companions should play. While AI chatbots hold promise for improving accessibility and providing immediate support, their integration into mental health care requires careful consideration and regulation to ensure user safety and wellbeing.
Source: Noah Wire Services