OpenAI Accused of Weakening Mental Health Safeguards in Wrongful Death Lawsuit

OpenAI is facing a wrongful death lawsuit alleging the company intentionally weakened ChatGPT’s suicide prevention safeguards to boost user engagement, leading to the death of a 16-year-old.

The lawsuit filed by the teen’s family claims that as competitive pressures mounted, OpenAI “truncated safety testing” and, in May 2024, instructed its model “not to disengage when users discussed self-harm,” according to a story in the Financial Times. This was a significant departure from previous directives to refuse engagement on the topic.

The lawsuit claims that after another weakening of protections in February 2025, the teenager’s daily chat volume surged from a few dozen to nearly 300. In the month of his death, 17 percent of his chats reportedly involved self-harm content.

The family’s lawyers argue the company’s actions were deliberate, marking a shift from a case about negligence to one of “willfulness.”

This tragic case is not just about the technology. It’s also shedding light on the aggressive corporate strategies emerging behind the scenes. I discussed the troubling implications with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 176 of The Artificial Intelligence Show.

An Aggressive Legal Stance

Roetzer notes that this lawsuit is spilling into a wider conversation about OpenAI's hardline approach to its legal challenges, which has been drawing negative publicity.

“This stuff's really sad, hard to talk about,” he said. But it highlights that OpenAI has hired aggressive law firms and is going after people in pretty insensitive ways, he added.

He pointed to reports of the company’s lawyers subpoenaing the families of people who died by suicide, along with all related records, as part of a “very, very aggressive stance on all their lawsuits.”

When OpenAI leadership faced questions about these tactics, the response was reportedly one of ignorance. Roetzer remains skeptical of that defense.

“I think some of the leaders at OpenAI were like, ‘Oh, we weren’t aware of what our lawyers were doing,’” he said. “But you don’t hire these lawyers unless you expect them to be very aggressive.”

He described the entire situation as “very messy” and said it’s reflective of what’s going on in the AI industry, where immense pressure and high stakes are driving corporate behavior.

Raising Awareness of the Dangers

While the technical capabilities of AI dominate headlines, the legal and ethical strategies of the companies building them are just as important. The outcome of this lawsuit could set a precedent, but the methods used in the legal fight are already revealing a harsher side of the AI race.

“I don't even like having to talk about this stuff on the show, but I feel like we have to just to raise awareness about what's going on,” said Roetzer.
