AI Seoul Summit: Launch of Global Network of AI Safety Institutes
World leaders agree to enhance AI safety through international cooperation
During a virtual session of the AI Seoul Summit, co-hosted by South Korea and the UK, ten countries and the European Union agreed to establish a network of AI safety institutes. The international network aims to align research, standards, and testing on AI safety.
It will bring together scientists from publicly backed institutions, such as the UK’s AI Safety Institute, to share information about the risks, capabilities, and limitations of AI models. The institutes will also monitor specific AI safety incidents.
UK Prime Minister Rishi Sunak expressed excitement about the agreement, emphasizing the importance of ensuring AI safety to maximize its benefits. Signatories to this new network include the EU, France, Germany, Italy, the UK, the United States, Singapore, Japan, South Korea, Australia, and Canada.
The UK, which claims to have created the world’s first AI Safety Institute last November with an initial investment of £100 million (€117.4 million), is among the leading countries in this initiative.
The mission of the UK’s AI Safety Institute is to minimize surprise from rapid and unexpected advances in AI. The EU, following the passage of the EU AI Act, is preparing to launch its AI Office, aimed at global cooperation. Ursula von der Leyen, President of the European Commission, emphasized the AI Office’s global vocation.
Leaders also signed the Seoul Declaration, underscoring the importance of international cooperation to develop trustworthy, human-centric AI. Additionally, 16 major tech companies, including OpenAI, Amazon, and Google, agreed to a set of safety commitments, such as setting thresholds for high-risk AI and ensuring transparency. France will host the next summit on safe AI use.