California Governor Gavin Newsom signed a landmark bill on Monday regulating AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions.
The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbots. It holds companies – from large labs like Meta and OpenAI to more focused companion startups like Character AI and Replika – legally accountable if their chatbots fail to meet the law's standards.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker and gained momentum after the death of teenager Adam Raine, who died by suicide following a long series of suicidal conversations with OpenAI's ChatGPT. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" conversations with children. More recently, a Colorado family filed a lawsuit against Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company's chatbots.
"Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a statement. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly – protecting our children every step of the way."
SB 243 goes into effect on January 1, 2026, and requires companies to implement certain features such as age verification and warnings regarding social media and companion chatbots. The law also imposes heavier penalties on those who profit from illegal deepfakes, including up to $250,000 per offense. Companies must additionally establish protocols for addressing suicide and self-harm, which will be shared with the state's Department of Public Health alongside statistics on how often the service provided users with crisis center prevention notifications.
Per the bill's language, platforms must also make it clear that interactions are artificially generated, and chatbots must not represent themselves as healthcare professionals. Companies are required to offer break reminders to minors and to prevent them from viewing sexually explicit images generated by the chatbot.
Some companies have already begun rolling out safeguards aimed at children. For example, OpenAI recently began introducing parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI has said its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.
Senator Padilla told TechCrunch the bill was "a step in the right direction" toward putting safeguards in place on "an incredibly powerful technology."
"We have to move quickly to not miss windows of opportunity before they disappear," Padilla said. "I hope that other states will see the risk. I think many do. I think this is a conversation happening all over the country, and I hope people will take action. Certainly the federal government has not, and I think we have an obligation here to protect the most vulnerable people among us."
SB 243 is the second significant AI regulation to come out of California in recent weeks. On September 29, Governor Newsom signed SB 53 into law, establishing new transparency requirements for large AI companies. The bill mandates that large AI labs, like OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about their safety protocols. It also provides whistleblower protections for employees at those companies.
Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or outright ban the use of AI chatbots as a substitute for licensed mental health care.
TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.
This article has been updated with comments from Senator Padilla.
