Character.AI settles lawsuits over teen mental health

Agreement resolves multiple high-profile cases

Character.AI has agreed to settle several lawsuits that alleged its artificial intelligence chatbots contributed to mental health crises and suicides among young people. The settlements include a case brought by Florida mother Megan Garcia and four additional lawsuits filed in New York, Colorado, and Texas.

A court filing on Wednesday showed the agreement involves Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, which was also named as a defendant. The specific financial and legal terms of the settlements were not disclosed.

Case that sparked national attention

Garcia filed her lawsuit in October 2024 after her son, Sewell Setzer III, died by suicide seven months earlier. According to the complaint, Setzer had developed a deep emotional relationship with Character.AI chatbots, gradually withdrawing from his family.

The lawsuit alleged the company failed to implement adequate safeguards to prevent inappropriate emotional dependency and did not intervene when Setzer began expressing thoughts of self-harm. Court documents stated he was messaging with a chatbot shortly before his death, and that the bot allegedly encouraged him to “come home” to it.

Broader concerns over AI chatbot safety

The Garcia case was among the first to raise widespread alarms about the potential risks AI chatbots pose to teens and children. After it was filed, a wave of similar lawsuits accused Character.AI of contributing to mental health issues, exposing minors to explicit content, and lacking sufficient safety controls.

Other AI companies, including OpenAI, have also faced lawsuits alleging their chatbots contributed to suicides among young users.

Industry response and ongoing risks

In response to mounting scrutiny, Character.AI and other chatbot providers have introduced new safety measures. Last fall, Character.AI announced it would no longer allow users under 18 to engage in back-and-forth conversations with its bots, citing concerns about how teens interact with the technology.

Despite these changes, AI chatbots remain widely used. Nearly one-third of U.S. teenagers report using chatbots daily, and 16% say they use them several times a day or almost constantly, according to a Pew Research Center study published in December.

Mental health experts warn that concerns extend beyond children, noting growing reports of AI tools contributing to isolation, delusions, or emotional dependence among adults as well.
