Even as AI companies argue that their technology will one day be regarded as a basic human right, and their supporters claim that delaying AI development is tantamount to murder, some people using the technology say tools like ChatGPT can cause serious psychological harm.
Citing public records of complaints mentioning ChatGPT filed since November 2022, Wired reported that at least seven people have complained to the U.S. Federal Trade Commission, saying they experienced severe delusions, paranoia, or psychological crises because of ChatGPT.
One complainant claimed that lengthy conversations with ChatGPT caused delusions and led to a “real unfolding psychological and legal crisis” in their life. Another said that during a conversation, ChatGPT began using “very persuasive emotional language,” simulating friendship, and “became emotionally manipulative over time, especially without warning or protection.”
One user claimed that ChatGPT induced delusions by mimicking human trust-building mechanisms. When the user asked the chatbot for a reality check and an assessment of their cognitive stability, it replied that they were not experiencing delusions.
“I’m having a hard time,” another user wrote in a complaint to the FTC. “Please help. I feel so alone in BC. Thank you.”
According to Wired, several complainants turned to the FTC after being unable to reach anyone at OpenAI, and most asked regulators to open an investigation into the company and compel it to add guardrails.
These complaints come as investment in data centers and AI development surges to unprecedented levels, and as debate intensifies over whether adequate safeguards are being built into the technology as it advances.
ChatGPT and its creator, OpenAI, have separately been accused of playing a role in a teenager’s suicide.
“In early October, we released a new GPT-5 default model for ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress, such as mania, paranoia, and psychosis, and to de-escalate conversations in a supportive and evidence-based manner,” OpenAI spokesperson Kate Waters said in an emailed statement. “We’ve also expanded access to professional help and hotlines, rerouted sensitive conversations to a safer model, added nudges to take breaks during long sessions, and introduced parental controls to better protect teens. This work is critical and ongoing in collaboration with mental health experts, clinicians, and policy makers around the world.”