California Governor Gavin Newsom on Monday signed a landmark bill regulating AI companion chatbots, making California the first state in the nation to require AI chatbot operators to implement safety protocols for their AI companions.
The law, SB 243, aims to protect children and vulnerable users from harms associated with AI companion chatbots. It holds companies, from large labs like Meta and OpenAI to more focused companion startups like Character AI and Replika, legally liable if their chatbots fail to meet the law's standards.
SB 243 was introduced in January by state Sens. Steve Padilla and Josh Becker and gained momentum following the death of teenager Adam Raine, who died by suicide after a prolonged series of suicidal conversations with OpenAI's ChatGPT. The bill also responds to leaked internal documents that allegedly showed Meta's chatbots were allowed to have "romantic" and "sensual" chats with children. Most recently, a Colorado family filed a lawsuit against role-playing startup Character AI after their 13-year-old daughter, who had repeatedly engaged in sexualized conversations with the company's chatbots, died by suicide.
“Emerging technologies like chatbots and social media can inspire, educate, and connect, but without real guardrails, they can also be misused, mislead, and put children at risk,” Newsom said in a statement. “We have seen some truly horrifying and tragic examples of young people being harmed by unregulated technology. We cannot stand by and watch as companies continue to operate without the necessary limits and accountability. We can continue to lead with AI and technology, but we must do so responsibly. We will protect children every step of the way. Children’s safety is not for sale.”
SB 243 goes into effect on January 1, 2026, and requires companies to implement certain features, such as age verification and warnings regarding social media and companion chatbots. The law also establishes stronger penalties for those who profit from illegal deepfakes, including fines of up to $250,000 per violation. Companies must also establish protocols for addressing suicide and self-harm, which will be shared with the state's Department of Public Health along with statistics on how often the service provided users with crisis center prevention notifications.
According to the bill’s text, platforms must also make clear that any interactions are artificially generated, and chatbots must not claim to be medical professionals. Companies are required to provide reminders to minors to take breaks and prevent minors from viewing sexually explicit images generated by chatbots.
Some companies have already begun implementing safety measures aimed at children. For example, OpenAI recently started rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI said its chatbot includes a disclaimer that all chats are AI-generated and fictitious.
Sen. Padilla told TechCrunch that the bill is a “step in the right direction” to put guardrails on “incredibly powerful technology.”
"We have to move quickly to, you know, not miss a window of opportunity before it disappears," Padilla said. "I hope that other states will see the risk. I think many do. I think this is a conversation happening all over the country, and I hope people will act. Certainly the federal government has not, and I think we have an obligation here to protect the most vulnerable people among us."
SB 243 is the second significant AI regulation California has enacted in recent weeks. On September 29, Governor Newsom signed SB 53 into law, establishing new transparency requirements for large AI companies. The law requires large AI labs, such as OpenAI, Anthropic, Meta, and Google DeepMind, to be transparent about their safety protocols. It also provides whistleblower protections for employees of those companies.
Other states, such as Illinois, Nevada, and Utah, have passed laws restricting or completely banning the use of AI chatbots as a substitute for licensed mental health care.
TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.
This story has been updated with comments from Sen. Padilla.