White House Signs 8 More Companies to Safe AI Pledge

The recent action by the White House extends the Biden-Harris Administration's initiatives to mitigate AI risks and capitalize on its advantages.

The White House has announced that eight more tech companies have committed to developing safe, secure, and trustworthy artificial intelligence (AI): Adobe, IBM, Nvidia, Cohere, Palantir, Salesforce, Scale AI, and Stability AI. The move builds on the Biden-Harris Administration's efforts to manage AI risks and harness its benefits.

When the initiative launched in July, leading U.S. AI companies, including Anthropic, Inflection AI, and Meta, agreed to the voluntary safeguards. The commitments are divided into three categories: Safety, Security, and Trust, and apply to generative models that surpass the current industry frontier.

Key Commitments Outlined

The companies have agreed to a series of measures aimed at ensuring the responsible development and deployment of AI. These include:

  • Conducting both internal and external security testing of AI systems before their release.
  • Sharing information across the industry and with governments, civil society, and academia on managing AI risks.
  • Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
  • Facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
  • Developing mechanisms to ensure users can identify AI-generated content, such as watermarking.
  • Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
  • Prioritizing research on societal risks posed by AI, including harmful bias, discrimination, and privacy concerns.
  • Developing and deploying AI systems to address society’s major challenges.

International Collaboration and Broader Initiatives

The Biden-Harris Administration has also been collaborating with international partners, including Australia, Brazil, Canada, Germany, India, Japan, and the UK. These collaborations aim to develop a unified approach to AI safety and regulation. The Administration is also working on an Executive Order on AI to further protect Americans' rights and safety.

In addition to these commitments, the Biden-Harris Administration has taken several steps to ensure AI’s responsible development. This includes launching the “AI Cyber Challenge,” convening meetings with top AI experts and company CEOs, publishing a blueprint for an AI Bill of Rights, and investing in National AI Research Institutes.

“So long as we are thoughtful and measured, we can ensure safe, trustworthy, and ethical deployment of AI systems,” NVIDIA Chief Scientist William Dally said in recent Senate testimony. He emphasized the balance between AI regulation, national security considerations, and the potential misuse of AI technology.
