Leading U.S. tech companies, including OpenAI, Google, Microsoft, Amazon, Anthropic, Inflection AI, and Meta, have pledged to adopt a set of voluntary safeguards to ensure the safe, secure, and transparent development of AI technology, the White House announced on Friday. The commitments are divided into three categories: Safety, Security, and Trust, and apply to generative models that surpass the current industry frontier.
Safety Commitments
The companies have agreed to conduct internal and external red-teaming of models or systems to assess risks in areas such as misuse, societal harm, and national security, including biosecurity and cybersecurity. They will also work towards sharing information among companies and with governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards. Furthermore, they will publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks such as effects on fairness and bias. They have also committed to prioritizing research on the societal risks posed by AI systems, including avoiding harmful bias and discrimination and protecting privacy.
Security Commitments
On the security front, the companies have committed to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. They will also incentivize third-party discovery and reporting of issues and vulnerabilities.
Trust Commitments
In terms of trust, the companies will develop and deploy mechanisms, such as robust provenance tracking, watermarking, or both, that enable users to tell whether audio or visual content is AI-generated. They have also pledged to develop and deploy frontier AI systems to help address society’s greatest challenges.
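To make the provenance idea concrete, the sketch below is a simplified illustration, not C2PA or any signatory's actual mechanism: it binds a signed manifest to a media file's hash so a verifier can later confirm both that the content is labeled as AI-generated and that it has not been altered since signing. The key, function names, and the "example-image-model" label are hypothetical, and a real deployment would use public-key signatures rather than a shared-secret HMAC.

```python
# Simplified sketch of a provenance/labeling mechanism (illustrative only):
# attach a signed manifest to AI-generated media and verify it later.
# Real systems rely on public-key signatures and standards such as C2PA;
# here an HMAC with a shared secret stands in for the signature.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical key, for this sketch only


def make_manifest(media_bytes: bytes, generator: str) -> dict:
    """Build a provenance manifest binding the media's hash to its origin."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = {"generator": generator, "sha256": digest, "ai_generated": True}
    signature = hmac.new(
        SECRET_KEY, json.dumps(payload, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and still matches the media bytes."""
    expected_sig = hmac.new(
        SECRET_KEY,
        json.dumps(manifest["payload"], sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    if not hmac.compare_digest(expected_sig, manifest["signature"]):
        return False  # manifest was tampered with or not issued by the generator
    return manifest["payload"]["sha256"] == hashlib.sha256(media_bytes).hexdigest()


if __name__ == "__main__":
    media = b"...synthetic image bytes..."  # placeholder media content
    manifest = make_manifest(media, generator="example-image-model")
    print(verify_manifest(media, manifest))            # True: intact, labeled AI-generated
    print(verify_manifest(media + b"edit", manifest))  # False: content no longer matches
```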
Future Steps
These voluntary commitments are seen as a first step towards developing and enforcing binding obligations that ensure safety, security, and trust in AI. The Biden-Harris Administration says it will continue to take executive action and pursue bipartisan legislation to help America lead the way in responsible AI innovation, and it will work with allies and partners on a strong international code of conduct to govern the development and use of AI worldwide.
Company Perspectives
Microsoft has expressed strong support for the new voluntary commitments, stressing the importance of ensuring that AI systems are safe, secure, and trustworthy. Google highlighted its commitment to using AI to address societal challenges, such as forecasting floods and improving healthcare. OpenAI reiterated its mission to build safe and beneficial artificial general intelligence (AGI) and underscored the importance of piloting and refining governance practices tailored to highly capable foundation models.
The commitments underscore three principles fundamental to the future of AI: security, safety, and trust. However, the White House’s announcement contained few concrete details of what the companies will actually be expected to do. It remains unclear how the White House will hold companies accountable, as the scheme is voluntary and the announcement did not include an enforcement mechanism. Despite these uncertainties, the White House stated that these voluntary commitments mark a critical step towards developing responsible AI.