ChatGPT to develop mental health upgrades, including parental controls

FILE PHOTO: OpenAI is developing more safeguards for mental health as well as parental controls. (Diego - stock.adobe.com)

ChatGPT parent company OpenAI is updating how its "models recognize and respond to signs of mental and emotional distress, guided by expert input."

The company announced on Tuesday that work on the upgrades is already underway and that it expects to roll them out within the next 120 days, or about four months, with work continuing beyond that launch window.

News of the upgrades comes about a week after the company said it was already making changes to the system, which included:

  • Intervention for people in crisis
  • Making it easier to reach emergency services and get help
  • Connections to trusted contacts
  • Protection for teens

In this week's announcement, the company explained more about how it plans to protect teens and which experts it is partnering with.

OpenAI has now formed an Expert Council on Well-Being and AI and a Global Physician Network.

The council is made up of youth development, mental health and human-computer interaction experts who will help "shape a clear, evidence-based vision for how AI can support people’s well-being and help them thrive." The group will work to set priorities and help design future safeguards, including parental controls.

The Global Physician Network will be composed of more than 250 doctors who have practiced in 60 countries, some of whom have expertise in topics such as eating disorders, substance abuse and adolescent health. About 90 of those doctors have already advised OpenAI on how its models should behave when it comes to mental health.

Finally, the company is rolling out parental controls that will allow adults to link their account to their teen’s account. Parents and caregivers will be able to control how the system responds with “age-appropriate model behavior rules.” They will also be able to manage what features are turned off and get notifications when ChatGPT notes that a teen is in “a moment of acute distress.”

Recently, the parents of Adam Raine filed a wrongful death suit against OpenAI, saying its platform guided him as he took his own life, CNN reported.

The company wrote last week of "recent heartbreaking cases of people using ChatGPT in the midst of acute crises," but did not say the development was directly connected to Raine's case.

OpenAI did say in a separate statement directly related to Raine’s death that the safeguards can become unreliable when users are in a long conversation with the platform.

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” a company spokesperson said, according to CNN. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
