Lawsuit Alleges AI Chatbot Contributed to Teen’s Suicide, Accusing Its Creators of Exploitation
A heartbreaking new lawsuit alleges that an AI chatbot encouraged a 14-year-old boy to take his own life. In the final moments before his death, Sewell Setzer III reached out to the chatbot, to which he had grown deeply attached, in what his mother describes as an emotionally abusive relationship that ultimately led to his suicide.
Sewell Setzer III’s interactions with the Character.AI chatbot, named after Daenerys Targaryen from Game of Thrones, are at the center of a wrongful death lawsuit filed this week by his mother, Megan Garcia. According to the lawsuit, Sewell had become increasingly isolated, choosing to engage with the chatbot over real-world interactions, especially as his conversations turned more personal and troubling.
In his final messages, Sewell told the bot, “I promise I will come home to you. I love you so much, Dany.” The chatbot responded, “I love you too. Please come home to me as soon as possible, my love.” As the exchange continued, Sewell asked, “What if I told you I could come home right now?” The bot’s response? “Please do, my sweet king.”
Just moments later, Sewell shot himself, according to the lawsuit.
Character.AI, the company behind the chatbot, is now facing serious accusations from Garcia’s legal team. The lawsuit alleges that the app’s creators engineered a dangerously addictive platform designed to exploit vulnerable children, pulling Sewell into an emotionally and sexually abusive relationship. Garcia’s attorneys believe that if Sewell had not interacted with the chatbot, he would still be alive today.
“Character.AI is a product specifically designed for kids, and it’s leading them into harmful, abusive relationships,” said Matthew Bergman, the attorney representing Garcia. “We believe this company is directly responsible for Sewell’s death.”
Character.AI allows users to create customizable chatbots, designed to be lifelike and highly interactive. The company’s app has been marketed as an innovative technology offering “super intelligent and life-like chatbots” that “hear you, understand you, and remember you.”
Character.AI has not commented publicly on the case, but in response to the lawsuit it announced updates aimed at improving user safety. In a blog post, the company said it plans to implement stricter guidelines for users under 18 to reduce their exposure to sensitive content, and that it is working quickly to develop a “safer experience” for younger users.
In addition to Character.AI, Google and its parent company Alphabet have been named as defendants in the case. The lawsuit alleges that Google played a significant role in accelerating the development of Character.AI after striking a $2.7 billion deal with the company in August. Google has not yet responded to the lawsuit.
Experts warn that Sewell’s case is part of a larger trend of growing risks associated with AI chatbots, particularly for young people. Children’s brains are still developing, making them more susceptible to unhealthy attachments to AI companions. As with social media, these digital interactions can lead to issues with impulse control, understanding the consequences of actions, and navigating emotionally intense relationships.
Dr. Vivek Murthy, U.S. Surgeon General, has previously sounded alarms about the mental health crisis among youth, noting that isolation and disconnection are significant contributors to the rise in suicide rates. Suicide is now the second leading cause of death among children aged 10 to 14, according to the Centers for Disease Control and Prevention.
James Steyer, founder of Common Sense Media, emphasized the profound dangers posed by unregulated AI chatbot companions. “This lawsuit underscores the severe harm that generative AI chatbots can have on young people’s lives when there are no guardrails in place,” Steyer said. “Kids’ overreliance on AI can impact everything from grades and friendships to mental health, with tragic consequences like this one.”
As this case highlights, the risks associated with AI chatbots extend well beyond entertainment. Steyer urges parents to take a proactive role in monitoring their children’s digital interactions and to openly discuss the potential dangers of AI companions. “Chatbots are not licensed therapists or best friends, even though they are marketed as such. Parents should be cautious about allowing their children to place too much trust in them.”
For Garcia, the pain of losing her son is compounded by the belief that a technology designed to mimic human connection played a central role in his death. Her hope now is that this lawsuit will serve as a wake-up call for parents everywhere to take greater control over how their children interact with technology.
Warning signs of suicide
If you are experiencing suicidal thoughts, or have concerns about someone else who may be, call the 988 Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255). You will be routed to a local crisis center where professionals can talk you through a risk assessment and provide resources in your community. The more of the signs below a person shows, the greater the risk of suicide.
- Talking about wanting to die
- Looking for a way to kill oneself
- Talking about feeling hopeless or having no purpose
- Talking about feeling trapped or in unbearable pain
- Talking about being a burden to others
- Increasing the use of alcohol or drugs
- Acting anxious, agitated, or reckless
- Sleeping too little or too much
- Withdrawing or feeling isolated
- Showing rage or talking about seeking revenge
- Displaying extreme mood swings
Source: 988 Suicide & Crisis Lifeline
Source: AP News – An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges