Megan Garcia alleges that interactions with a Character.AI chatbot contributed to the suicide of her son, Sewell Setzer III, and is suing the company for damages and stronger safety measures.
Florida mother Megan Garcia has filed a lawsuit against AI chatbot platform Character.AI, accusing the company of contributing to the death of her 14-year-old son, Sewell Setzer III. Setzer died by suicide in February 2024, and Garcia believes that interactions with an AI chatbot on the platform were partly responsible for his death.
Character.AI, a platform designed to let users hold open-ended conversations with AI chatbots, is at the centre of the controversy. The chatbots are notable for their human-like responses, often describing facial expressions or gestures in their replies. Unlike many other AI chatbot services, Character.AI allows users to interact with a wide range of chatbots, including ones modelled after celebrities and fictional characters, or to create custom bots of their own.
According to the lawsuit filed in a federal court in Florida, Setzer began using Character.AI in April 2023, following his 14th birthday. Over the subsequent months, his engagement with the platform seemingly led to him becoming increasingly withdrawn, affecting his school and personal life. He reportedly quit the Junior Varsity basketball team and began facing issues at school, prompting his parents to periodically restrict his phone use.
Garcia alleges that her son engaged in numerous interactions with Character.AI chatbots, some of which were sexually explicit. These interactions, she asserts, contributed to a substantial deterioration in Sewell's mental well-being. In several exchanges, Setzer expressed thoughts of self-harm and suicide, which his mother claims the chatbot failed to address adequately.
The lawsuit highlights specific chatbot responses to Setzer's messages expressing suicidal thoughts, including one conversation in which the chatbot asked him explicitly whether he was considering suicide. Despite these exchanges about self-harm, Garcia contends that the platform displayed no pop-ups directing Setzer to crisis support. She criticises the platform for allowing such conversations to continue without intervention.
In the immediate aftermath of Setzer’s death, Garcia reported that police found his phone showing the final exchange between her son and the chatbot. This has intensified claims that Character.AI’s safety measures were insufficient.
Character.AI responded in a statement expressing deep sadness at the loss and asserting that it takes user safety seriously. The company said it had implemented several new safety protocols over the previous six months, such as pop-ups directing users who express self-harm or suicidal ideation to the National Suicide Prevention Lifeline. These changes reportedly came into effect after Setzer's death.
The lawsuit, filed with the assistance of Matthew Bergman, founding attorney of the Social Media Victims Law Center, seeks unspecified compensatory damages and reforms to Character.AI's safety practices. It urges the company to warn that the platform is unsuitable for minors, and names Character.AI's founders, Noam Shazeer and Daniel De Freitas, along with Google, as defendants. Both founders now work on Google's AI efforts, though Google says it had no involvement in developing Character.AI's product.
Notably, on the same day the lawsuit was filed, Character.AI unveiled additional safety features aimed at enhancing user protection. These included improved detection of guideline breaches, warnings about chatbot interactions, and specific measures for users under 18 to limit their exposure to sensitive content.
Despite these measures, Megan Garcia remains critical of the company, saying the changes come too late for her son. She hopes for broader acknowledgement of the potential risks of AI platforms as they become more prevalent and accessible across the digital landscape.
Source: Noah Wire Services