1. Leading AI companies, including Amazon, Google, Meta, Microsoft, OpenAI, xAI, and Zhipu AI, agree to new voluntary safety commitments at an AI summit in Seoul.
2. The commitments include pledges not to develop or deploy AI models that pose severe risks, as well as to publish methods for measuring and mitigating the risks associated with those models.
3. Nations including the UK and US, together with the EU, establish an international network of publicly backed AI Safety Institutes, with China engaging in ongoing discussions on AI safety despite not signing the agreement.
Prominent AI companies have agreed to a new set of safety commitments announced by the UK and South Korean governments ahead of a two-day AI summit in Seoul. Sixteen tech companies, including Amazon, Google, Meta, Microsoft, OpenAI, xAI, and Zhipu AI, have opted into the framework, representing North America, Europe, the Middle East, and Asia. The commitments include pledges not to develop or deploy models with unmanageable risks, along with a commitment to measure and mitigate the risks their models pose.
The new safety commitments follow recommendations made by eminent AI researchers in a paper published in Science, which emphasized oversight, honesty, robustness, interpretability, and transparency, among other priorities. Anna Makanju, vice president of global affairs at OpenAI, expressed support for the commitments, highlighting the importance of collaborating to ensure AI is safe and beneficial for humanity. Michael Sellitto, Head of Global Affairs at Anthropic, also endorsed the commitments, stressing the need for responsible AI development and deployment.
These new commitments echo similar voluntary commitments made at the White House last year to encourage safe, secure, and transparent AI development. While there is no enforcement mechanism behind them, they are seen as laying the foundation for potential domestic regulation in the future. Whether companies' actions will match these stated commitments remains to be seen, but the establishment of an international network of publicly backed "AI Safety Institutes" by ten nations and the EU shows progress toward international cooperation on AI safety science. China, despite not being a signatory to the agreement, has expressed a willingness to cooperate on AI safety and has been engaged in discussions with the US on the topic.