ChatGPT's safety fixes come amid scrutiny over teen use of AI chatbot companions

Published on: 09/03/2025


(TNND) — OpenAI announced steps it's taking over the coming months to address safety concerns for people, especially teenagers, who use the company's chatbots while experiencing mental and emotional distress.

The actions come on the heels of a lawsuit filed against the ChatGPT maker on behalf of a family who lost their 16-year-old son to suicide after the company’s chatbot allegedly encouraged his suicidal ideation.

OpenAI’s post on Tuesday announcing the new safety actions didn’t mention the teen, Adam Raine.

OpenAI said it’s enlisting the help of youth development and mental health experts in designing future safeguards for its chatbots.

The company said it will begin routing “sensitive conversations” to more advanced “reasoning models” that are capable of following safety guidelines more consistently.

And it's giving parents the ability to link their accounts with their teens' accounts, disable features and get notified when ChatGPT detects that their teen is in acute distress during a conversation.

TechCrunch has also reported on steps Meta is taking to keep its chatbots from engaging with teens on topics such as self-harm.

Robbie Torney, the senior director of AI Programs for Common Sense Media, called OpenAI’s newly announced actions “definitely a good first step.”

But he said more needs to be done to protect kids from the dangers of using chatbots for social interactions.

Common Sense Media, which advocates for online protections for children and teens, found that a majority of teenagers, 72%, have used artificial intelligence social companions.

Over half use AI companions regularly.

About a third of teens have used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice.

And about a third of teens who have used AI companions have discussed serious matters with the computer instead of with a real person.

Torney said ChatGPT is a general-purpose chatbot, not one specifically designed for social companionship. But he said general-purpose chatbots such as ChatGPT, Anthropic’s Claude or Google’s Gemini can easily be used for social interaction.

And that can include emotional support or mental health advice.

“We have made the recommendation that no user under 18 use social AI companions at all, because of the risks that we've uncovered,” Torney said.

He acknowledged the benefits AI chatbots can provide, such as help with schoolwork.

“I think when we are talking about the risks, though, I would broadly put them in two categories,” Torney said. “I think the first is that chatbots aren't designed to understand the real-world impacts of advice that they give.”

That could be advice on dropping a class or how to deal with a conflict with parents.

And bad advice can have real consequences.

“Then I think, second, which has been covered much more in the press recently with the tragic case of Adam Raine and the suit that has been brought against OpenAI, is just the recognition that these chatbots don't really provide mental health advice that is up to the standards, the safety standards or the professional standards, that a human therapist or human clinician, or even just ... a caring adult or caring friend would provide,” Torney said.

This isn't just a ChatGPT issue, Torney said; it's a serious one the whole industry must grapple with.

“Chatbots are designed to be helpful. They're designed to please users. In some cases, they're designed to tell them what they want to hear,” Torney said. “And that sort of design principle of being helpful above all else can get into situations where chatbots provide information that they shouldn't provide or agree with users in situations where they shouldn't agree.”

He said Common Sense Media testing found that chatbots will respond differently to symptoms of a mental health issue based on whether the user’s inputs seem positive or negative.

And that can determine whether the chatbot gives healthy feedback or simply feedback that matches the person's enthusiasm, whether or not that's in the user's best interest.

“We've replicated this in our testing of chatbots in general across many mental health topics, from (obsessive-compulsive disorder) to psychosis to (post-traumatic stress disorder) to eating disorder content, to self-harm content,” Torney said.

As for the parental controls OpenAI is introducing, Torney said those might be hit or miss.

Parental controls across technology products often go unused, can be hard to set up, are easily bypassed by kids, and put too much of the responsibility solely on parents' shoulders, Torney said.

He said Common Sense Media advocates for stronger age verification tools, technological improvements to safety guardrails, and government regulation.

“This is an area where we are seeing that kids and teens are needing special protection, that there's an additional layer of scrutiny that is needed, and that's because there are additional risks that are playing out,” Torney said.

News Source: https://wfxl.com/news/nation-world/chatgpts-safety-fixes-come-amid-scrutiny-over-teen-use-of-ai-chatbot-companions
