
Tech Giants and AI Firms Pledge AI Safety Measures at Seoul Summit

The Seoul Summit has introduced the Frontier AI Safety Commitments, which require participating companies to publish safety frameworks


Sixteen prominent AI companies, including Microsoft, Amazon, IBM, and OpenAI, have agreed to deactivate their technologies if they exhibit signs of causing harmful outcomes. This commitment was made during the AI Seoul Summit 2024 in South Korea, a significant event following last year's AI Safety Summit at Bletchley Park. That earlier summit resulted in The Bletchley Declaration, signed by 28 nations and the EU, which outlined a vision for managing AI risks without binding commitments.

Frontier AI Safety Commitments

The Seoul Summit has introduced the Frontier AI Safety Commitments, which require participating companies to publish safety frameworks. These frameworks will detail how they plan to measure and manage the risks associated with their AI models. Companies must specify when risks become unacceptable and outline the actions they will take in such scenarios. If risk mitigations fail, the signatories have pledged to halt the development or deployment of the problematic AI model or system.

The signatories have committed to several initiatives, including red-teaming their AI models, sharing information, investing in security safeguards, and incentivizing third-party vulnerability reporting. They have also pledged to label AI-generated content, prioritize research on societal risks, and use AI to address global challenges.

Among the signatories are OpenAI, Microsoft, Amazon, Anthropic, Cohere, G42, Inflection AI, Meta, Mistral AI, Naver, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai. The specifics of these commitments are expected to be finalized at the “AI Action Summit” scheduled for early 2025.

The organizations have agreed to the following Frontier AI Safety Commitments:

“Outcome 1. Organisations effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems. They will:

I. Assess the risks posed by their frontier models or systems across the AI lifecycle, including before deploying that model or system, and, as appropriate, before and during training. Risk assessments should consider model capabilities and the context in which they are developed and deployed, as well as the efficacy of implemented mitigations to reduce the risks associated with their foreseeable use and misuse. They should also consider results from internal and external evaluations as appropriate, such as by independent third-party evaluators, their home governments[footnote 2], and other bodies their governments deem appropriate.

II. Set out thresholds[footnote 3] at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable. Assess whether these thresholds have been breached, including monitoring how close a model or system is to such a breach. These thresholds should be defined with input from trusted actors, including organisations' respective home governments as appropriate. They should align with relevant international agreements to which their home governments are party. They should also be accompanied by an explanation of how thresholds were decided upon, and by specific examples of situations where the models or systems would pose intolerable risk.

III. Articulate how risk mitigations will be identified and implemented to keep risks within defined thresholds, including safety and security-related risk mitigations such as modifying system behaviours and implementing robust security controls for unreleased model weights.

IV. Set out explicit processes they intend to follow if their model or system poses risks that meet or exceed the pre-defined thresholds. This includes processes to further develop and deploy their systems and models only if they assess that residual risks would stay below the thresholds. In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds.

V. Continually invest in advancing their ability to implement commitments i-iv, including risk assessment and identification, thresholds definition, and mitigation effectiveness. This should include processes to assess and monitor the adequacy of mitigations, and identify additional mitigations as needed to ensure risks remain below the pre-defined thresholds. They will contribute to and take into account emerging best practice, international standards, and science on AI risk identification, assessment, and mitigation.

Outcome 2. Organisations are accountable for safely developing and deploying their frontier AI models and systems. They will:

VI. Adhere to the commitments outlined in I-V, including by developing and continuously reviewing internal accountability and governance frameworks and assigning roles, responsibilities and sufficient resources to do so.

Outcome 3. Organisations' approaches to frontier AI safety are appropriately transparent to external actors, including governments. They will:

VII. Provide public transparency on the implementation of the above (I-VI), except insofar as doing so would increase risk or divulge sensitive commercial information to a degree disproportionate to the societal benefit. They should still share more detailed information which cannot be shared publicly with trusted actors, including their respective home governments or appointed body, as appropriate.

VIII. Explain how, if at all, external actors, such as governments, civil society, academics, and the public are involved in the process of assessing the risks of their AI models and systems, the adequacy of their safety framework (as described under I-VI), and their adherence to that framework.”

Global Cooperation and Future Plans

In a jointly written op-ed, UK Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol emphasized the urgency of accelerating efforts in AI governance. The Seoul Summit also saw the adoption of the Seoul Declaration, which highlights the importance of interoperability between AI governance frameworks to maximize benefits and mitigate risks. This declaration was endorsed by representatives from the G7, Singapore, Australia, the UN, the OECD, and the EU, alongside industry leaders.

Markus Kasanmascheff
Markus is the founder of WinBuzzer and has been playing with Windows and technology for more than 25 years. He holds a Master's degree in International Economics and previously worked as Lead Windows Expert for Softonic.com.