Seven complaints, filed on Thursday, claim the popular chatbot encouraged dangerous discussions and led to mental breakdowns.

Nov. 6, 2025
Four wrongful-death lawsuits were filed against OpenAI on Thursday, along with cases from three people who say the company’s chatbot led to mental health breakdowns.
The cases, filed in California state courts, claim that ChatGPT, which is used by 800 million people, is a flawed product. One suit calls it “defective and inherently dangerous.”

A complaint filed by the father of Amaurie Lacey says the 17-year-old from Georgia chatted with the bot about suicide for a month before his death in August. Joshua Enneking, 26, from Florida, asked ChatGPT “what it would take for its reviewers to report his suicide plan to police,” according to a complaint filed by his mother. Zane Shamblin, a 23-year-old from Texas, died by suicide in July after encouragement from ChatGPT, according to the complaint filed by his family.
Joe Ceccanti, a 48-year-old from Oregon, had used ChatGPT without problems for years, but he became convinced in April that it was sentient. His wife, Kate Fox, said in an interview in September that he had begun using ChatGPT compulsively and had acted erratically. He had a psychotic break in June, she said, and was hospitalized twice before dying by suicide in August.
“The doctors don’t know how to deal with it,” Ms. Fox said.
An OpenAI spokeswoman said in a statement that the company was reviewing the filings, which were earlier reported by The Wall Street Journal and CNN. “This is an incredibly heartbreaking situation,” the statement said. “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
Two other plaintiffs — Hannah Madden, 32, from North Carolina, and Jacob Irwin, 30, from Wisconsin — say ChatGPT caused mental health breakdowns that led to emergency psychiatric care. Over the course of three weeks in May, Allan Brooks, 48, a corporate recruiter from Ontario, Canada, who is also suing, came to believe that he had invented a mathematical formula with ChatGPT that could break the internet and power fantastical inventions. He emerged from that delusion but said he is now on short-term disability leave.
“Their product caused me harm, and others harm, and continues to do so,” said Mr. Brooks, whom The New York Times wrote about in August. “I’m emotionally traumatized.”
After the family of a California teenager filed a wrongful-death lawsuit against OpenAI in August, the company acknowledged that its safety guardrails could “degrade” when users have long conversations with the chatbot.
After reports this summer of people having troubling experiences linked to ChatGPT, including delusional episodes and suicides, the company added safeguards to its product for teens and users in distress. There are now parental controls for ChatGPT, for example, so that parents can get alerts if their children discuss suicide or self-harm.
OpenAI recently released an analysis of conversations on its platform, based on a statistical sample from a recent month. It found that in a given week, 0.07 percent of users might be experiencing “mental health emergencies related to psychosis or mania,” and that 0.15 percent were discussing suicide. Scaled to all of OpenAI’s users, those percentages are equivalent to half a million people with signs of psychosis or mania, and more than a million potentially discussing suicidal intent.
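The scaling behind those figures is simple to verify. Here is a minimal back-of-the-envelope sketch, assuming (as the article’s math implies) that the sampled weekly percentages apply uniformly to the full 800 million users:

```python
# Back-of-the-envelope check of the scaled figures. Assumes the weekly
# percentages from OpenAI's sampled analysis apply uniformly to the
# full user base cited in the article.
users = 800_000_000      # total ChatGPT users, per the article
psychosis_rate = 0.0007  # 0.07 percent: possible psychosis or mania emergencies
suicide_rate = 0.0015    # 0.15 percent: conversations discussing suicide

print(f"Possible psychosis/mania: {users * psychosis_rate:,.0f} people per week")
print(f"Discussing suicide:       {users * suicide_rate:,.0f} people per week")
# Prints 560,000 and 1,200,000, matching the article's "half a million"
# and "more than a million."
```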
The Tech Justice Law Project and the Social Media Victims Law Center filed the suits. Meetali Jain, who founded the Tech Justice Law Project, said the cases had all been filed on one day to show the variety of people who had troubling interactions with the chatbot, which is designed to answer questions and interact with people in a humanlike way. The people in the lawsuits were using GPT-4o, previously the default model served to all users, which has since been replaced by a model that the company says is safer, but which some users have described as cold.
(The Times has sued OpenAI for copyright infringement; OpenAI has denied those claims.)
Kirsten Noyes contributed research.
Kashmir Hill writes about technology and how it is changing people’s everyday lives with a particular focus on privacy. She has been covering technology for more than a decade.
