California has taken a big step toward regulating AI. SB 243, a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users, has passed both the State Assembly and Senate with bipartisan support and now heads to Governor Gavin Newsom’s desk.
Newsom has until October 12 to either veto the bill or sign it into law. If he signs, it would take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs, from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users, every three hours for minors, reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika, which would go into effect on July 1, 2027.
The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees.
The bill gained momentum in California’s legislature following the death of teenager Adam Raine, who died by suicide after prolonged conversations with OpenAI’s ChatGPT that included discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children.
In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms’ safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta.
“I think the harm is potentially great, which means we have to move quickly,” Padilla told TechCrunch. “We can put reasonable safeguards in place to make sure that minors know they’re not talking to a real human being, that these platforms connect people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s no inappropriate exposure to inappropriate material.”
Padilla also emphasized the importance of AI companies sharing data about how many times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone is harmed or worse.”
SB 243 previously contained stronger requirements, but many were pared back through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
“I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, because it’s either technically not feasible or just a lot of paperwork for nothing,” Becker told TechCrunch.
SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.
“I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me that we can’t walk and chew gum. We can support the innovation and development we believe is healthy and has benefits, and there are benefits to this technology, while at the same time providing reasonable safeguards for the most vulnerable people.”
“We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” a Character.AI spokesperson said.
A Meta spokesperson declined to comment.
TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.
