
Major Tech Companies, including Microsoft and Apple, Join Forces in U.S. AI Safety Institute Consortium

AISIC, a new White House-mandated U.S. consortium that includes the tech giants, aims to set AI safety standards.


U.S. Secretary of Commerce Gina Raimondo has announced the establishment of a consortium aimed at setting safety standards for the development of artificial intelligence (AI). The U.S. AI Safety Institute Consortium (AISIC) draws participation from over 200 entities, including key industry players such as Apple, OpenAI, Microsoft, and other tech giants.

Industry-Government Collaboration for AI Security

With the rapid advancement of AI technologies, the U.S. AI Safety Institute Consortium seeks to address the critical lack of security protocols for AI deployment. AISIC represents a collaborative effort combining the expertise of academics, government officials, industry researchers, and AI developers to ensure AI is developed and deployed responsibly. The consortium's creation aligns with President Biden's Executive Order on AI, which emphasizes user protection, competition, risk management, and the prioritization of civil rights and equity.

The U.S. government aspires to lead in mitigating the risks of AI while unlocking its potential benefits. Secretary Raimondo explains, “The U.S. government has a significant role in setting the standards…for artificial intelligence.” This consortium reflects the commitment to meet President Biden’s directive to establish safety standards and protect the nation’s innovation ecosystem.

Mitigating Emerging Threats and Setting Protocols

AISIC will engage in activities such as “red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content” to protect against the threats AI poses to user privacy and security. Instances of AI abuse, including deepfake audio of President Biden and cases of financial fraud facilitated by synthetic media, underscore the urgency of such measures. The Federal Trade Commission has even incentivized the detection of fake audio with a substantial reward, demonstrating how difficult it has become to distinguish real content from synthetic content.

Efforts from AISIC members are already visible: Meta and OpenAI have implemented watermarks intended to identify AI-generated images. Meta has recently made watermarking of AI-generated content mandatory across its platforms, including Instagram, Facebook, and Threads. The AISIC initiative is a substantial step toward addressing safety concerns in the fast-evolving landscape of AI while seeking to maintain America's leadership in technological innovation.

Last Updated on November 7, 2024 10:33 pm CET

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
