In a new report, a California-based policy group co-led by Fei-Fei Li, the AI pioneer, suggests that lawmakers should consider AI risks that “have not yet been observed in the world” when crafting AI regulatory policies.
The 41-page interim report, released Tuesday, comes from the Joint California Policy Working Group on AI Frontier Models, an effort organized by Governor Gavin Newsom in the wake of California’s contentious AI safety bill, SB 1047.
In the report, Li, along with co-authors UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates such as Turing Award winner Yoshua Bengio as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.
According to the report, the novel risks posed by AI systems may require laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.
Li et al. write that there is an “inconclusive level of evidence” for AI’s potential to help carry out cyberattacks, create biological weapons, or bring about other “extreme” threats. They also argue, however, that AI policy should not only address present-day risks, but anticipate future consequences that might occur without sufficient safeguards.
“For example, we do not need to observe a nuclear weapon [exploding] to reliably predict that it could and would cause extensive harm,” the report states. “If those who speculate about the most extreme risks are right – and we are uncertain if they will be – then the stakes and costs for inaction on frontier AI at this current moment are extremely high.”
The report recommends a two-pronged strategy to increase transparency into AI model development: trust but verify. AI model developers and their employees should be provided avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit testing claims for third-party verification.
While the report, the final version of which is due out in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate.
Dean Ball, an AI-focused researcher at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California’s AI safety regulation. It is also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on “urgent conversations around AI governance we began in the legislature [in 2024].”
The report appears to align with several elements of SB 1047 and Wiener’s follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a wider view, it seems to be a much-needed win for AI safety advocates, whose agenda has lost ground in the past year.