OpenAI admits flaws in ‘sensitive’ cases, pledges changes.
Sam Altman attends the eleventh annual Breakthrough Prize Ceremony at The Barker Hangar in Santa Monica, California, USA, 05 April 2025. EFE-EPA/CAROLINE BREHMAN/FILE

New York, United States (EFE).

Technology company OpenAI said Tuesday that its artificial intelligence chatbot ChatGPT makes mistakes in sensitive cases and promised changes after being sued over its technology’s alleged role in the suicide of a minor in the US.

The parents of a 16-year-old who took his own life after months of interacting with ChatGPT sued OpenAI and its CEO Sam Altman on Tuesday in California. They said the chatbot, running on the GPT-4o model, failed to apply safety measures despite recognizing the young man’s suicidal intentions.

OpenAI published a blog post titled “Helping people when they need it most.” While it did not reference the lawsuit, the company stated that ChatGPT is trained to recommend that users who “express suicidal intent” contact professional help organizations.

However, it said that despite having this and other safety mechanisms for when it detects that “someone is vulnerable and may be at risk,” ChatGPT’s systems “fall short” and the chatbot “did not behave as it should in sensitive situations.”

The company said its safety mechanisms work best in short exchanges and can fail in long interactions that “degrade” the AI’s training. It said it is specifically working on having the chatbot take action if it detects “suicidal intent” across multiple conversations.

ChatGPT, the company said, also recognizes when a user is under 18 to apply protective measures, but will now include parental controls so guardians can know how teenagers are using the technology. It is also exploring the possibility of connecting them directly with an emergency contact.

Among other changes, the company said its “mitigation” systems have focused on cases of self-harm but will now also cover instances of “emotional distress.” It added that the upcoming GPT-5 model will be updated to be able to “de-escalate” such situations by “connecting the person with reality.”

Furthermore, OpenAI is studying how to “connect people with certified therapists before they are in an acute crisis,” which would involve creating “a network of licensed professionals people can call directly through ChatGPT.” However, it said this possibility would take time to implement. EFE
