
Major Tech Companies, including Microsoft and Apple, Join Forces in U.S. AI Safety Institute Consortium

The new White House-mandated U.S. consortium, AISIC, brings together tech giants to set AI safety standards.


U.S. Secretary of Commerce Gina Raimondo has announced the establishment of a consortium aimed at setting safety standards for the development of artificial intelligence (AI). The U.S. AI Safety Institute Consortium (AISIC) draws participation from more than 200 entities, including key industry players such as Microsoft, Apple, Meta, OpenAI, and other tech giants.

Industry-Government Collaboration for AI Security

With the rapid advancement of AI technologies, the U.S. AI Safety Institute Consortium seeks to address the critical lack of security protocols for AI deployment. AISIC represents a collaborative effort combining the expertise of academics, government, industry researchers, and AI developers to ensure AI is developed and deployed responsibly. The consortium's creation aligns with President Biden's Executive Order, which emphasizes user protection, competition, risk management, and the prioritization of civil rights and equity.

The consortium aspires to lead in mitigating the risks of AI while unlocking its potential benefits. Secretary Raimondo explains, “The U.S. government has a significant role in setting the standards…for artificial intelligence.” The consortium reflects a commitment to meet President Biden's directive to establish safety standards and protect the nation's innovation ecosystem.

Mitigating Emerging Threats and Setting Protocols

AISIC will engage in activities such as “red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content” to protect against the threats AI poses to user privacy and security. Instances of AI abuse, including deepfake audio of President Biden and cases of financial fraud facilitated by synthetic media, underscore the urgency of such measures. The Federal Trade Commission has even offered a substantial reward for methods of identifying fake audio, demonstrating how difficult it has become to distinguish real from synthetic content.

Efforts from AISIC members are already visible: Meta and OpenAI have implemented watermarks intended to identify AI-generated content. Meta has recently made labeling of AI-generated content mandatory across its platforms, including Facebook, Instagram, and Threads. The AISIC initiative is a substantial step toward addressing safety concerns in the fast-evolving landscape of AI, while seeking to maintain America's leadership in technological innovation.

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
