The AI company announced the change amid growing concern over the impact of chatbots on youth mental health.
Released on September 3, 2025
OpenAI has announced plans to introduce parental controls for ChatGPT amid growing controversy over how artificial intelligence is affecting the mental health of young people.
In a blog post on Tuesday, the California-based AI company said it is rolling out the features in recognition of families who need support to “set healthy guidelines that fit the unique development phase of teenagers.”
Under the changes, parents will be able to link their ChatGPT accounts with their child’s account, disable certain features including memory and chat history, and control how the chatbot responds to queries via “age-appropriate model behavior rules”.
Parents will also be able to receive notifications when their teen shows signs of distress, OpenAI said, adding that it is seeking expert input to implement the feature in a way that “supports trust between parents and teens.”
OpenAI, which last week announced a series of measures aimed at improving safety for vulnerable users, said the changes will take effect within the next month.
“These steps are just the beginning,” the company said.
“We will continue to learn and strengthen our expert-guided approach with the goal of making ChatGPT as useful as possible. We look forward to sharing our progress over the next 120 days.”
OpenAI’s announcement comes a week after a California couple filed a lawsuit accusing the company of being responsible for the suicide of their 16-year-old son.
Matt and Maria Raine allege in their lawsuit that ChatGPT validated their son Adam’s “most harmful and self-destructive thoughts”, and that his death was a “predictable result of intentional design choices.”
OpenAI, which previously expressed its sadness over the teenager’s death, did not explicitly mention the case in its announcement of the parental controls.
Jay Edelson, the lawyer representing the Raine family in the lawsuit, dismissed OpenAI’s planned changes as an attempt to “change the argument.”
“They say the product should just be more sensitive to people in crisis, more ‘helpful’, show a bit more ‘empathy’, and the experts will figure that out,” Edelson said in a statement.
“We understand, strategically, why they want that. OpenAI can’t respond to what actually happened to Adam, because Adam’s case is not about ChatGPT failing to be ‘helpful.’”
The use of AI models by people experiencing severe mental distress has been a focus of growing concern as chatbots increasingly stand in for therapists or friends.
In a study published in the journal Psychiatric Services last month, researchers found that while ChatGPT, Google’s Gemini, and Anthropic’s Claude followed clinical best practices when answering high-risk questions about suicide, they were inconsistent when dealing with queries posing “moderate levels of risk.”
“These findings suggest the need for further refinement to ensure that LLMs can be used safely and effectively for dispensing mental health information, particularly in high-stakes scenarios involving suicidal ideation,” the authors said.
If you or someone you know is at risk of suicide, these organizations may be able to help.

