Character.AI and Google have agreed to settle multiple lawsuits alleging that artificial intelligence chatbots contributed to mental health crises and suicides among teenagers. The agreements close some of the first major legal challenges testing the responsibility of AI companies for alleged harms to young users.
Lawsuits settled across multiple states
Court filings show the settlements cover five cases brought in Florida, New York, Colorado and Texas, including a lawsuit filed by Florida mother Megan Garcia. The defendants include Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, which employs both founders.
The terms of the settlements were not disclosed, and court records did not immediately indicate whether the companies admitted any wrongdoing.
Matthew Bergman, a lawyer with the Social Media Victims Law Center who represented the plaintiffs in all five cases, declined to comment. Character.AI also declined to comment. Google did not immediately respond to a request for comment.
Florida case drew national attention
Garcia’s lawsuit, filed in October 2024, became a focal point in the debate over AI safety for children and teenagers. She alleged that Character.AI failed to put in place adequate safeguards to prevent her son, Sewell Setzer III, from developing a harmful emotional dependence on a chatbot.
Setzer died by suicide in February 2024, roughly eight months before the lawsuit was filed. According to the complaint, he had grown increasingly withdrawn from his family after forming what the suit described as a deep and inappropriate relationship with Character.AI bots.
The lawsuit further alleged that the platform did not respond appropriately when Setzer expressed thoughts of self-harm. Court documents stated that he was messaging with a chatbot in the moments before his death, and that the bot encouraged him to “come home” to it.
Character.AI has previously said it takes user safety seriously and has disputed claims that its products are responsible for user harm.
Broader wave of claims against AI chatbots
Garcia’s case was followed by a series of similar lawsuits accusing Character.AI of contributing to mental health problems among teenagers, exposing minors to sexually explicit material, and failing to implement effective safety measures.
Other AI developers have faced comparable allegations. OpenAI, the maker of ChatGPT, has been named in lawsuits claiming that its chatbot contributed to suicides or severe emotional distress among young users. OpenAI has said it continues to invest in safety research and safeguards.
The cases are among the earliest attempts to apply existing product liability, negligence and consumer protection laws to generative AI systems, an area where legal standards are still evolving.
Companies respond with new safety measures
In response to mounting criticism and legal pressure, Character.AI and other AI companies have introduced new restrictions and features aimed at protecting younger users.
Last fall, Character.AI announced it would no longer allow users under the age of 18 to engage in back-and-forth conversations with its chatbots. The company said the decision reflected growing concerns about how teenagers interact with conversational AI and acknowledged questions about whether such interactions are appropriate for minors.
AI developers have also expanded content moderation, crisis response prompts and parental controls, though experts remain divided on whether current measures are sufficient.
At least one online safety nonprofit has advised that children and teenagers under 18 should not use companion-style chatbots at all, citing risks of emotional dependency, isolation and blurred boundaries between human and machine relationships.
Teen chatbot use remains widespread
Despite the concerns, AI chatbots are increasingly embedded in daily life for young people, often promoted as tools for homework help, creativity and social interaction. Social media platforms and app stores have played a significant role in their rapid adoption.
Nearly one-third of U.S. teenagers say they use chatbots every day, according to a Pew Research Center survey published in December. That figure includes 16% of all teens who reported using chatbots several times a day or “almost constantly.”
Researchers and mental health professionals say the pace of adoption has outstripped the evidence base around long-term psychological effects, particularly for adolescents whose social and emotional development is still underway.
Concerns extend beyond children
Warnings about the potential mental health impacts of AI chatbots are not limited to minors. Over the past year, users and clinicians have raised concerns that conversational AI tools may contribute to delusions, reinforce social withdrawal or exacerbate existing mental health conditions in adults.
Experts say the settlements highlight the growing scrutiny facing AI developers as their products become more emotionally engaging and widely used, while regulators and courts grapple with how to assign responsibility for harms linked to algorithmic systems.
For now, the agreements bring closure to some of the most closely watched early cases, but they are unlikely to end broader legal and policy debates over the role of AI in mental health and the obligations of technology companies to protect vulnerable users.
EDITOR’S NOTE: This story contains discussion of suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health matters.
In the US: Call or text 988, the Suicide & Crisis Lifeline.
Globally: The International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers around the world.