
OpenAI announced it is adding parental controls to ChatGPT days after parents of a 16-year-old sued the company over its artificial intelligence chatbot’s role in their son’s suicide.
The company said Tuesday the parental controls, available in October, will let parents connect their accounts with their teenager’s account. Parents will control feature settings, customize how ChatGPT interacts with their child, and receive alerts when the system identifies signs their teen may be experiencing significant emotional distress.
When ChatGPT detects that a user is in distress, it will automatically switch to the GPT-5-thinking model to deliver more supportive responses, regardless of which model the user initially selected. This capability will be rolled out to all users in the near future. OpenAI said it consulted physicians and mental health experts in developing the controls.
The announcement follows a lawsuit filed against OpenAI and its CEO, Sam Altman, by the parents of Adam Raine, a 16-year-old California resident. They claim ChatGPT helped the teen plan his suicide in April. Raine’s father, Matt, found conversations on his son’s iPhone showing that the teen had bypassed ChatGPT’s guardrails and was discussing suicide methods with the chatbot.
Separately, Meta Platforms Inc., which owns Instagram, Facebook, and WhatsApp, announced new restrictions preventing its AI chatbots from discussing self-harm, suicide, eating disorders, or inappropriate romantic topics with teenage users. Instead, the chatbots will redirect teens to professional support resources. These measures build upon existing parental oversight features for teen accounts.
The case of Adam Raine, detailed in a front-page New York Times story Monday, is one of several involving chatbots and suicides, raising questions about, and prompting studies of, “delusional conversations” between humans and chatbots. A controlled study by OpenAI and MIT found that heavier daily chatbot use was associated with greater loneliness and reduced socialization.
Last week, RAND Corporation researchers published findings in Psychiatric Services indicating that ChatGPT, Anthropic’s Claude, and Google’s Gemini require “further refinement” in how they respond to suicide-related queries. The study did not examine Meta’s chatbot offerings.
Lead researcher Ryan McBain said Tuesday that while OpenAI’s and Meta’s introduction of parental controls and improved routing of sensitive conversations represents progress, the changes are “incremental steps” toward addressing the limitations his team identified.
New legislation sponsored by New York City Councilman Frank Morano, R-Staten Island, would require AI chatbot makers to notify users that the bots are not human and can make mistakes, according to a New York Post report Tuesday.