As more people turn to artificial intelligence for mental health advice, some states have begun regulating apps that offer AI “therapy” in the absence of stronger federal rules.
But the laws, all passed this year, don’t fully address the fast-changing landscape of AI software development. And app developers, policymakers and mental health advocates say the resulting patchwork of state laws isn’t enough to protect users or hold the makers of harmful technology accountable.
“We’re looking forward to seeing you in the future,” said Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick.
___
Editor’s note – This story includes discussion of suicide. If you or someone you know needs help, the U.S. National Suicide and Crisis Lifeline is available by calling or texting 988. There is also an online chat at 988Lifeline.org.
___
State laws take a variety of approaches. Illinois and Nevada have banned the use of AI to treat mental health. Utah has placed certain limits on therapy chatbots, including requiring them to protect users’ health information and to clearly disclose that the chatbot is not human. Pennsylvania, New Jersey and California are also considering ways to regulate AI therapy.
The impact on users varies. Some apps have blocked access in states with bans. Others say they haven’t made changes as they wait for more legal clarity.
And many of the laws do not cover general-purpose chatbots like ChatGPT, which are not explicitly marketed for therapy but are used by countless people for it. Those bots have drawn lawsuits in cases where users reportedly lost their grip on reality or took their own lives after interacting with them.
Vaile Wright, who oversees health care innovation at the American Psychological Association, agreed that the apps could fill a need, pointing to a nationwide shortage of mental health providers, high costs for care and uneven access even for insured patients.
A mental health chatbot that is rooted in science, created with expert input and monitored by humans could change the landscape, Wright said.
“This may be something that helps people before they get into a crisis,” she said. “That’s not something that’s available on the commercial market right now.”
That’s why federal regulations and oversight are needed, she said.
Earlier this month, the Federal Trade Commission said it was opening inquiries into seven AI chatbot companies, including the parent company of Instagram and Facebook, Google, the maker of ChatGPT, Grok (the chatbot on X), Character.AI and Snapchat, over how they measure, test and monitor the potentially negative impacts of this technology on children and teens. And the Food and Drug Administration is convening an advisory committee on Nov. 6 to review generative AI-enabled mental health devices.
Federal agencies could consider restricting how chatbots are marketed, limiting addictive practices, requiring disclosures to users that they are not medical providers, requiring companies to track and report suicidal thoughts, and offering legal protections to people who report bad practices by companies, Wright said.
Not all apps block access
From “companion apps” to “AI therapists” to “mental wellness” apps, AI’s use in mental health care is varied and hard to define, let alone write laws around.
That has led to a variety of regulatory approaches. Some states, for example, take aim at companion apps that are designed for friendship only, but don’t wade into mental health care. The laws in Illinois and Nevada ban products that claim to provide outright mental health treatment, threatening fines of up to $10,000 in Illinois and $15,000 in Nevada.
However, even a single app can be difficult to categorize.
Earkick’s Stephan said there are still many things that are “very muddy” about Illinois law, for example.
Stephan and her team initially held off on calling their chatbot, which looks like a cartoon panda, a therapist. But when users began using that word in reviews, they embraced the term so the app would show up in searches.
Last week, they backed away from therapy and medical terminology again. Earkick’s website had described its chatbot as “your empathic AI counselor to support your mental health journey”; now it is a “chatbot for self-care.”
Still, “we don’t diagnose,” Stephan insisted.
Users can set up a “panic button” to call a trusted loved one if they are in crisis, and the chatbot will “nudge” users to seek out a therapist if their mental health worsens. But Stephan said the app was not designed to be a suicide prevention tool, and police would not be called if someone told the bot about thoughts of self-harm.
Stephan said she is happy that people are looking at AI with a critical eye, but worried about states’ ability to keep up with innovation.
“The speed at which everything is evolving is huge,” she said.
Other apps blocked access right away. When Illinois users download the AI therapy app Ash, a message urges them to email their lawmakers, arguing that the legislation banning apps like Ash is misguided.
A spokesperson for Ash did not respond to multiple requests for an interview.
Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said the ultimate goal is to make sure only licensed therapists are providing therapy.
“Therapy is more than just an exchange of words,” Treto said. “It requires empathy, clinical judgment and ethical responsibility that AI simply can’t replicate right now.”
One chatbot company is trying to fully replicate therapy
In March, a Dartmouth-based team published the first known randomized clinical trial of a generative AI chatbot for mental health treatment.
The goal was for the chatbot, called Therabot, to treat people diagnosed with anxiety, depression or eating disorders. It was trained on vignettes and transcripts written by the team to illustrate evidence-based responses.
The study found that users rated Therabot similarly to a therapist, and that their symptoms were significantly lower after eight weeks compared with people who did not use it. Every interaction was monitored by a human who intervened if the chatbot’s response was harmful or not evidence-based.
Nicholas Jacobson, a clinical psychologist whose lab is leading the research, said the results showed early promise but that larger studies are needed to show whether Therabot works for large numbers of people.
“The space is so dramatically new that I think the field needs to proceed with a lot more caution than is happening right now,” he said.
Many AI apps are optimized for engagement and are built to affirm everything users say, rather than challenging people’s thoughts the way therapists do. Many walk the line between companionship and therapy, crossing intimacy boundaries that therapists would not cross for ethical reasons.
The Therabot team tried to avoid these issues.
The app is still in testing and not widely available. But Jacobson worries about what strict bans mean for developers who take a careful approach. He noted that Illinois has no clear pathway for providing evidence that an app is safe and effective.
“They want to protect people, but the traditional system right now is really failing them,” he said. “So trying to stick with the status quo is really not the thing to do.”
Regulators and advocates of the laws say they are open to changes. But today’s chatbots are not a solution to the shortage of mental health providers, said Kyle Hillman.
“Not everybody who is feeling sad needs a therapist,” he said. But for people with real mental health issues or thoughts of suicide, he said, telling them “we know there’s a workforce shortage, but here’s a bot” comes from “such a privileged position.”
___
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Science and Educational Media Group and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.