Ahead of the AI Security Summit kicking off in Seoul, South Korea later this week, its co-host, the United Kingdom, is expanding its own efforts in the field. The AI Security Institute – a UK organization founded in November 2023 with the ambitious goal of assessing and addressing risks in artificial intelligence platforms – has said it will open a second location… in San Francisco.
The idea is to get closer to what is currently the epicenter of AI development, with the Bay Area home to OpenAI, Anthropic, Google, and Meta, among other companies building core AI technology.
Foundation models are the building blocks of generative AI services and other applications, and interestingly, although the UK has signed an MOU with the US for the two countries to collaborate on AI security initiatives, the UK is still choosing to invest in building a direct presence of its own in the US to address the issue.
“By having people in San Francisco, it’s going to give them access to the headquarters of a lot of these AI companies,” Michelle Donelan, the UK secretary of state for science, innovation and technology, said in an interview with TechCrunch. “Many of them have bases here in the UK, but we think it would be very useful to have a base there as well, and access to an additional pool of talent, and be able to work even more collaboratively and hand-in-glove with the United States.”
Part of the reason is that, for the UK, being closer to this epicenter is useful not only for understanding what is being built, but also because it gives the UK more visibility with these companies – important, given that the UK sees artificial intelligence and technology as a whole as a huge opportunity for economic growth and investment.
And given the recent drama at OpenAI surrounding its Superalignment team, it feels like a particularly timely moment to establish a presence there.
Launched in November 2023, the AI Security Institute is currently a relatively modest affair. The organization has just 32 people working for it today, a real David to the Goliath of AI technology, considering the billions of dollars of investment pouring into companies building AI models, and thus those companies’ own financial incentives to get their technologies out the door and into the hands of paying users.
One of the AI Security Institute’s most notable developments was the release earlier this month of Inspect, the first set of tools for testing the security of foundation AI models.
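(Inspect is distributed as an open-source Python package, inspect_ai. The sketch below follows the framework’s documented pattern – an evaluation is defined as a task bundling a dataset, a solver, and a scorer – but the dataset and model names are illustrative only, and exact APIs may differ between versions.)

```python
# A minimal sketch of an Inspect evaluation (pip install inspect-ai).
# Based on the framework's documented pattern; dataset and model names
# below are illustrative, and APIs may vary between versions.
from inspect_ai import Task, task
from inspect_ai.dataset import example_dataset
from inspect_ai.scorer import model_graded_fact
from inspect_ai.solver import chain_of_thought, generate

@task
def theory_of_mind():
    # An evaluation bundles a dataset of samples, a solver chain
    # (here: reason step by step, then produce an answer), and a
    # scorer that grades each answer against the dataset's target.
    return Task(
        dataset=example_dataset("theory_of_mind"),
        solver=[chain_of_thought(), generate()],
        scorer=model_graded_fact(),
    )

# Run from the command line against the model under test, e.g.:
#   inspect eval theory_of_mind.py --model openai/gpt-4o
```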
Donelan today referred to this release as a “phase one” effort. Not only has benchmarking models proven difficult to date, but engagement is currently largely an opt-in and inconsistent arrangement. As a senior source at a UK regulator pointed out, companies are under no legal obligation to have their models checked at this point, and not all companies are willing to have models tested pre-release. That could mean that, in cases where risk can be identified, the horse may have already bolted.
Donelan said the AI Security Institute was still developing the best way to work with AI companies to evaluate their models. “Our evaluations process is an emerging science in itself,” she said. “So with each evaluation, we will develop the process and refine it even more.”
Donelan said one aim in Seoul would be to introduce Inspect to regulators convening at the summit, in hopes of getting them to adopt it as well.
“Now we have an evaluation system. The second phase must also address the safety of artificial intelligence across the whole of society,” she said.
In the longer term, Donelan believes the UK will create more AI legislation, although, echoing what Prime Minister Rishi Sunak has said on the matter, it will resist doing so until the scope of AI risks is better understood.
“We don’t believe in legislating before we get it right,” she said, noting that the recent international report on AI security published by the institute, which focused primarily on trying to get a comprehensive picture of the research to date, “highlights that big gaps remain and that we need to incentivize and encourage more research globally.
“And also, legislation takes about a year in the UK. If we had just started legislating when we began instead of [organizing] the AI Security Summit [held in November last year], we would still be legislating now, and in fact we would have nothing to show for it.”
“From day one of the Institute, we have been clear about the importance of taking an international approach to AI security, sharing research and working with other countries to test models and predict the risks of frontier AI,” said Ian Hogarth, chair of the AI Security Institute. “Today marks a pivotal moment that allows us to take this agenda forward, and we’re proud to be scaling our operations in a region full of tech talent, adding to the incredible expertise our London staff have brought since the beginning.”