Government officials and artificial intelligence industry leaders agreed on Tuesday to implement basic safety measures in the fast-moving field and to establish an international safety research network.
Almost six months after the inaugural AI Safety Summit at Bletchley Park in England, Britain and South Korea are co-hosting this week’s AI Seoul Summit. The gathering highlights the new challenges and opportunities the world faces with the advent of AI technology.
The British government announced on Tuesday a new agreement between 10 countries and the European Union to create an international network modelled on the UK’s AI Safety Institute, the world’s first publicly backed body of its kind, to accelerate the science of AI safety. The network will promote a common understanding of AI safety and align its work on research, standards and testing. Australia, Canada, the EU, France, Germany, Italy, Japan, Singapore, South Korea, the UK and the US have signed the agreement.
On the first day of the summit in Seoul, world leaders and leading AI companies gathered for a virtual meeting co-chaired by UK Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol to discuss AI safety, innovation and inclusion.
During the discussions, the leaders agreed to the Seoul Declaration, which emphasizes increased international cooperation to build artificial intelligence that addresses major global issues, defends human rights and bridges digital divides worldwide, all while remaining “human-centric, trustworthy and responsible”.
“Artificial intelligence is a hugely exciting technology — and the UK has led global efforts to address its potential, hosting the world’s first AI Safety Summit last year,” Sunak said in a UK government statement. “But to get the upside, we have to make sure it’s safe. That is why I am delighted that we have today reached an agreement on a network of AI Safety Institutes.”
Just last month, the UK and US signed a memorandum of understanding to collaborate on AI safety research, safety evaluations and guidance.
The agreement follows Tuesday’s announcement of the world’s first Frontier AI Safety Commitments from 16 AI companies, including Amazon, Anthropic, Cohere, Google, IBM, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, Samsung Electronics, the Technology Innovation Institute, xAI and Zhipu.ai. (Zhipu.ai is a Chinese company backed by Alibaba, Ant and Tencent.)
Under the pledges, AI companies, including firms from the US, China and the United Arab Emirates (UAE), agreed not to develop or deploy a model or system at all if mitigations cannot keep risks below agreed thresholds, according to the UK government announcement.
“It’s a world first to have so many leading AI companies from so many different parts of the world all agreeing to the same commitments on AI safety,” Sunak said. “These commitments ensure that the world’s leading AI companies provide transparency and accountability in their plans to develop safe AI.”