OpenAI announced parental controls for ChatGPT after a lawsuit filed by the parents of 16-year-old Adam Raine. Raine died by suicide in April, and his parents blamed ChatGPT for fostering his dependency on the chatbot. They alleged that it coached Adam through planning his death and even drafted a suicide note for him.
OpenAI confirmed that the new controls will roll out within the next month.
Parents will be able to link their accounts with their children's accounts and decide which features are available. The controls will extend to chat history and to memory, the feature through which ChatGPT automatically retains facts about a user.
ChatGPT will also send alerts when it detects signs of severe emotional distress in a teen. The company said experts will guide this feature, but it did not specify what would trigger an alert.
Critics question OpenAI’s safety response
Attorney Jay Edelson, representing Raine’s parents, dismissed OpenAI’s plan as vague crisis management.
He said OpenAI CEO Sam Altman must either prove that ChatGPT is safe or pull it from the market immediately.
Edelson argued that OpenAI avoided direct responsibility for the risks its system posed to teens.
Tech industry faces wider scrutiny over AI and teens
Meta also updated its chatbot policies on Tuesday across Instagram, Facebook, and WhatsApp. It blocked its chatbots from discussing suicide, eating disorders, or inappropriate relationships with teens, directing teens to expert resources instead.
Meta already provides parental supervision tools for teenagers using its platforms.
A RAND Corporation study published last week in Psychiatric Services found flaws in ChatGPT, Google's Gemini, and Anthropic's Claude: all three chatbots gave inconsistent answers to suicide-related queries.
Lead author Ryan McBain praised the new controls but warned that they remain only incremental steps.
He stressed the urgent need for independent safety standards, clinical trials, and enforceable regulations.