The California State Senate recently gave final approval to a new AI safety bill, SB 53, sending it to Governor Gavin Newsom to sign or veto.
If this sounds familiar, it's because Newsom vetoed another AI safety bill, also written by Senator Scott Wiener, last year. However, SB 53 is narrower than Wiener's previous SB 1047, focusing on big AI companies that make more than $500 million in annual revenue.
I had the opportunity to discuss SB 53 with my colleagues Max Zeff and Kirsten Korosec on the latest episode of TechCrunch's podcast Equity. Max believes the new Wiener bill makes for a better law, partly because of its focus on large companies, and because it has been endorsed by AI company Anthropic.
Read a preview of our conversation about AI safety and state legislation below. (The transcript has been edited for length and clarity, and to make us sound slightly smarter.)
Max: Why should you care about AI safety legislation passing a chamber in California? We're entering a period when AI companies are becoming the most powerful companies in the world, and this could potentially be one of the few checks on their power.
This is much narrower than SB 1047, which got a lot of pushback last year. But I think SB 53 still sets some important regulations for AI labs. It makes them post safety reports for their models. If they have an incident, it basically forces them to report it to the government. And for employees at these labs, if they have concerns, it gives them a channel to report them to the government without facing retaliation from their companies, even though many of them have signed NDAs.
To me, this feels like a potentially significant check on the power of tech companies, something we haven't really had in recent decades.
Kirsten: In terms of why it matters at the state level, it's important to remember that this is California. Pretty much every major AI company, if not based here, has a significant footprint in this state. Not that other states don't matter (I don't want to get emails from people in Colorado or anything), but California matters specifically because it's really a hub of AI activity.
My question for you, though: there seem to be a lot of exceptions and carve-outs. Is it narrower but more complicated than the previous [bill]?
Max: In some ways, yes. I'd say the main thrust of this bill is that it really tries not to apply to small startups. One of the main points of contention around the last legislative effort from Senator Scott Wiener, who represents San Francisco and drafted this bill, was that many people said it could harm the startup ecosystem.
This bill applies specifically to AI developers who are [generating] more than $500 million [from] their AI models. It's really trying to target OpenAI, Google DeepMind, these big companies, and not smaller startups.
Anthony: As I understand it, if you're a smaller startup, you still need to share some safety information, just not as much.
It's [also] worth talking about the broader landscape around AI regulation, and the fact that one of the big changes between last year and this year is that there's a new president. The federal administration has taken much more of a hands-off stance, holding that companies should be able to do whatever they want, to the point that they've actually tried to include [language] in funding bills saying that states can't have their own AI regulation.
I don't think that has passed so far, but they may well try again in the future. So this could be another front in the fight between the Trump administration and blue states.
Equity is TechCrunch's flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday.
Subscribe to us on Apple Podcasts, Overcast, and all the casts. You can also follow Equity on X and Threads, at @EquityPod.
