Helen Toner, a former OpenAI board member and director of strategy at Georgetown's Center for Security and Emerging Technology, worries that Congress could react in a "knee-jerk" way on AI policymaking if the status quo doesn't change.
“Congress right now — I don’t know if anyone’s noticed — is not very functional, not very good at passing laws unless there’s a huge crisis,” Toner said at TechCrunch’s StrictlyVC event in Washington on Tuesday. “It’s a big, powerful piece of technology — something is going to go wrong at some point. And if the only laws we get are made in a knee-jerk way, as a reaction to a major crisis, is that going to be productive?”
Toner’s comments, which come ahead of Thursday’s White House-sponsored summit on ways to use artificial intelligence to support American innovation, underscore the long-standing impasse in U.S. AI policy.
In 2023, President Joe Biden signed an executive order that implemented certain consumer protections related to artificial intelligence and required developers of AI systems to share the results of safety tests with relevant government agencies. Earlier that year, the National Institute of Standards and Technology, which sets federal technology standards, released a roadmap for identifying and mitigating emerging AI risks.
But Congress has yet to pass AI legislation — or even to propose any law as comprehensive as the EU’s recently enacted AI Act. And with 2024 a major election year, that’s unlikely to change anytime soon.
As a report from the Brookings Institution notes, the gap in federal rulemaking has led state and local governments to rush to fill the void. In 2023, state lawmakers introduced over 440% more AI-related bills than in 2022, and nearly 400 new state-level AI bills have been proposed in recent months, according to lobbying group TechNet.
California lawmakers last month introduced about 30 new AI bills aimed at protecting consumers and jobs. Colorado recently passed a measure requiring AI companies to use “reasonable care” when developing the technology to avoid discrimination. And in March, Tennessee Gov. Bill Lee signed into law the ELVIS Act, which prohibits the cloning of voices or likenesses of musicians with artificial intelligence without their express consent.
The patchwork of rules threatens to heighten uncertainty for both industry and consumers.
Consider one example: state laws regulating artificial intelligence define “automated decision making” — a term that broadly refers to AI algorithms that make some kind of decision, such as whether a business is approved for a loan — in different ways. Some laws don’t consider a decision to be “automated” so long as it’s made with some level of human involvement; others are stricter.
Toner believes even a high-level federal mandate would be preferable to the current state of affairs.
“Some of the smartest and most thoughtful actors I’ve seen in this space are trying to say, okay, what are the nice little — pretty common sense — guardrails that we can put in place now to make future crises — future big problems — probably less severe, and basically make it less likely that you’ll end up needing some sort of rushed and poorly thought-out response later on,” she said.