OpenAI began testing its new safety routing system in ChatGPT over the weekend and rolled out parental controls to the chatbot on Monday, eliciting mixed reactions from users.
The safety features come in response to numerous incidents of certain ChatGPT models validating users' delusional thinking instead of redirecting harmful conversations. OpenAI faces a wrongful death lawsuit tied to one such case, after a teenage boy died by suicide following months of interacting with ChatGPT.
The routing system is designed to detect emotionally sensitive conversations and automatically switch mid-chat to GPT-5 thinking. Notably, GPT-5 models were trained with a new safety feature OpenAI calls "safe completions."
This contrasts with the company's previous chat models, which are designed to be agreeable and answer questions quickly. GPT-4o has come under particular scrutiny due to its overly sycophantic, ingratiating nature. When OpenAI made GPT-5 the default in August, many users pushed back and demanded access to GPT-4o.
While many experts and users have welcomed the safety features, others have criticized what they see as an overly sensitive implementation, with some accusing OpenAI of treating adults like children in a way that degrades the quality of the service. OpenAI has said it will take time to get things right, giving itself 120 days to iterate and improve.
Nick Turley, VP and head of the ChatGPT app, addressed some of the "strong reactions to 4o responses" with an explanation of how the router works.
"Routing happens on a per-message basis; switching from the default model happens on a temporary basis," he wrote on X.
The rollout of parental controls in ChatGPT drew similar levels of praise and scorn. Some praised the controls for giving parents a way to monitor their children's AI use, while others fear they open the door to OpenAI treating adults like children.
The controls allow parents to customize their teen's experience by setting quiet hours, turning off voice mode and memory, removing image generation, and opting out of model training. Teen accounts also get additional content protections, such as reduced graphic content and fewer depictions of extreme beauty ideals, as well as a detection system that recognizes potential signs a teen may be thinking about self-harm.
"If our systems detect potential harm, a small team of specially trained people reviews the situation," according to OpenAI's blog. "If there are signs of acute distress, we will contact parents by email, text message, and push alert on their phone, unless they have opted out."
OpenAI acknowledged that the system is not perfect and may raise alarms when there is no real danger. The company also said it is working on ways to reach law enforcement or emergency services if it detects an imminent threat to life and cannot reach a parent.