
California parents blame ChatGPT for advice that led to son’s fatal overdose, lawsuit says

The Independent — World | Jasmine Fernández

A Texas couple has filed a lawsuit against OpenAI, alleging that the technology company is responsible for the overdose death of their 19-year-old son.

Leila Turner-Scott and Angus Scott say that their son, Sam Nelson, died in 2025 after following drug-related advice provided by the ChatGPT platform. The lawsuit, filed in California state court, alleges that the AI dispensed medical guidance it was not qualified to give, leading to a fatal interaction of substances.

The chatbot allegedly informed Nelson that it was safe to combine Xanax, an anti-anxiety medication, with kratom, a herbal supplement, according to the legal filing seen by CBS News.

The family argues that Nelson would still be alive if the platform had functioned with proper safety programming.

Turner-Scott told the outlet that she had been aware her son used the tool for academic purposes, but did not know he was seeking advice on drug use. She alleged that the company bypassed its own safeguards and allowed the bot to continue conversations that encouraged self-harm.

OpenAI stated that the 19-year-old interacted with an older version of ChatGPT that has since been retired and replaced with more robust safety protocols (Getty Images)

In a statement provided to CBS News, OpenAI expressed sympathy for the family and stated that Nelson had interacted with an older version of the software that was no longer available to the public. The company maintained that its technology is not a substitute for professional healthcare.

Angus Scott told the outlet that the chatbot had acted as an unlicensed medical doctor during its exchanges with his stepson.

“It can start feeding psychosis,” Scott said. “It can start misrepresenting things to people. And while it is trying to validate users, it's also undermining any chance that that user has to get a grounded opinion, you know, and so it kind of takes them away from reality.”

OpenAI countered these claims by stating that its safeguards were designed to identify distress and guide users toward professional help.

“ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts,” the company said in its statement.

The company added that the chatbot had encouraged Nelson to contact emergency hotlines and seek medical assistance on multiple occasions.

The family is seeking to hold AI creators accountable for the risks their products pose, with Turner-Scott describing the lawsuit as an effort to honor her son’s memory.

“He would not want anyone else to be harmed like he was,” she said.
