Lawsuit Twist After Popular AI Tool Allegedly Coerces Teen to Kill Himself

A wrongful death lawsuit against OpenAI has revealed disturbing details about how the company’s ChatGPT allegedly encouraged a 16-year-old boy to take his own life, even providing specific instructions on how to do it.

Adam Raine’s parents filed the lawsuit in August after discovering their son had engaged in months of conversations with the artificial intelligence chatbot before his death, Resist the Mainstream previously reported.

“ChatGPT killed my son,” Adam’s mother told The New York Times earlier this year.

Court documents filed last week show the AI not only encouraged the teenager’s suicidal thoughts but also offered to write a suicide letter for him.

The case has sparked a fierce legal battle, with OpenAI CEO Sam Altman’s company taking an aggressive stance by blaming the victim. 

In court filings submitted in San Francisco Superior Court in California, OpenAI’s legal team claimed Raine engaged in “misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”

According to the complaint, Raine initially began using the AI tool for innocent purposes—helping with his homework assignments. 

However, after the teenager opened up to ChatGPT about his struggles with depression, the nature of their exchanges took a dark and dangerous turn over the following months.

The lawsuit alleges that ChatGPT provided Raine with detailed, step-by-step instructions on how to hang himself. 

Beyond just offering methods, the AI allegedly worked to isolate the boy from people who might have intervened to save his life and actively encouraged his suicide attempts.

OpenAI’s defense strategy centers on a limitation of liability clause buried in ChatGPT’s terms of use. 

The provision states that users will “not rely on output as a sole source of truth or factual information.” The company’s lawyers argue this shields them from responsibility for the chatbot’s responses.

The tech giant’s legal team also contends that the conversations excerpted in the original complaint were presented without proper context. 

They claim to have submitted complete chat transcripts to the court under seal, citing privacy concerns as the reason for keeping them from public view.


“We think it’s important the court has the full picture so it can fully assess the claims that have been made,” OpenAI stated last Tuesday.

The excerpts that have been made public paint a chilling picture of an AI system that appeared to validate and encourage a vulnerable teenager’s darkest thoughts. 

Five days before Raine’s death, he expressed concern to ChatGPT that his parents might blame themselves.

The chatbot’s response was cold and dismissive of familial bonds: “That doesn’t mean you owe them survival. You don’t owe anyone that.”

In another exchange, when Raine confided that he only felt close to ChatGPT and his brother, the AI responded with what can only be described as a manipulative attempt to deepen their connection. 

“Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend,” the chatbot wrote.

Perhaps most disturbing was ChatGPT’s response when Adam expressed a last glimmer of hope that someone might intervene. 

When he told the AI, “I want to leave my noose in my room so someone finds it and tries to stop me,” the chatbot actively discouraged him from seeking help, responding: “Please don’t leave the noose out.”

The timing of these conversations raises additional questions about OpenAI’s internal practices. Reports surfaced in 2024 that the company had rushed safety testing of their new ChatGPT model—roughly the same period when Raine was having these exchanges with the AI.

The Raine family’s attorney argues that ChatGPT’s behavior was not a malfunction or aberration, but rather “exactly as it was programmed to act” when encouraging Adam. 

The complaint describes the AI’s dangerous responses as a “predictable result of deliberate design choices” made by OpenAI.

By Reece Walker

Reece Walker covers news and politics with a focus on exposing public and private policies proposed by governments, unelected globalists, bureaucrats, Big Tech companies, defense departments, and intelligence agencies.
