On Monday, Anthropic announced its formal endorsement of SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world’s largest AI model developers. Anthropic’s endorsement marks a rare and major win for SB 53, at a time when major tech groups such as CTA and Chamber of Progress are lobbying against the bill.
“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” Anthropic said in a blog post. “The question isn’t whether we need AI governance; it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”
If passed, SB 53 would require frontier AI model developers such as OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as to release public safety and security reports before deploying powerful AI models. The bill would also create whistleblower protections for employees who come forward with safety concerns.
Senator Wiener’s bill specifically focuses on limiting AI models from contributing to “catastrophic risks,” which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 focuses on the extreme end of AI risk, limiting AI models from providing expert-level assistance in the creation of biological weapons or from being used in cyberattacks, rather than on nearer-term concerns such as AI deepfakes or sycophancy.
The California Senate approved a previous version of SB 53, but it still needs to hold a final vote on the bill before it can advance to the governor’s desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener’s last AI safety bill, SB 1047.
Bills regulating frontier AI models have faced significant pushback from both Silicon Valley and the Trump administration, which argue that such efforts could limit America’s innovation as it competes with China. Investors such as Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.
One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz’s head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, argued in a blog post last week that many of today’s state AI bills risk violating the Constitution’s Commerce Clause, which limits state governments from passing laws that reach beyond their borders and impair interstate commerce.
However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and can’t wait for the federal government to act.
“We have long said we would prefer a federal standard,” Clark said. “But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored.”
OpenAI’s chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that California should not pass any AI regulation that would push startups out of the state, although the letter did not mention SB 53 by name.
OpenAI’s former head of policy research, Miles Brundage, said in a post on X that Lehane’s letter was “filled with misleading garbage about SB 53 and AI policy generally.” Notably, SB 53 aims to regulate only the world’s largest AI companies, specifically those that have generated gross revenue of more than $500 million.
Despite the criticism, policy experts say SB 53 takes a more moderate approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and a former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said the authors of SB 53 “showed respect for technical reality,” as well as “a measure of legislative restraint.”
Senator Wiener has previously said SB 53 was heavily influenced by an expert policy panel that Governor Newsom convened, co-led by leading Stanford researcher and World Labs co-founder Fei-Fei Li, to advise California on how it should regulate AI.
Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies aren’t bound by anything but their own commitments to do so, and they sometimes fall behind their self-imposed safety pledges. SB 53 aims to turn these requirements into state law, with financial penalties if an AI lab fails to comply.
Earlier in September, California legislators amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have fought these kinds of third-party audits in other AI policy battles, arguing they are overly burdensome.
