
New NSA Cybersecurity Information Sheet Targets AI System Security

The NSA has released new guidance to improve the security of AI systems, with a particular focus on defense contractors.


The National Security Agency (NSA) has unveiled a comprehensive set of guidelines aimed at bolstering the security of artificial intelligence (AI) systems within organizations, particularly those involved in the defense industry. The guidance, a Cybersecurity Information Sheet (CSI) titled “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems,” marks a significant initiative by the NSA’s Artificial Intelligence Security Center (AISC). Established in fall 2023 as a component of the Cybersecurity Collaboration Center (CCC), the AISC is charged with fostering collaboration between government and industry to safeguard the Defense Industrial Base.

The Need for Specialized AI Security Measures

The NSA’s guidance underscores the unique security challenges posed by AI systems, which are exposed to attack vectors distinct from those facing traditional IT. According to the CSI, “Malicious actors targeting AI systems may use attack vectors unique to AI systems, as well as standard techniques used against traditional IT.” The distinction matters because AI systems can be compromised through adversarial machine learning attacks that alter a model’s behavior, generative AI attacks designed to bypass safety mechanisms, and supply chain attacks that, while similar to those affecting conventional software, carry unique implications for AI. A report by security vendor HiddenLayer highlights the urgency of addressing these vulnerabilities, finding that 77 percent of companies surveyed reported breaches of their AI systems in the past year.
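To make the first of those vectors concrete, the sketch below shows an adversarial evasion attack in the style of the fast gradient sign method (FGSM) against a toy logistic-regression classifier. The model, weights, and inputs are all hypothetical stand-ins, not anything drawn from the NSA guidance; the point is only to illustrate how a small, gradient-guided perturbation can push an input across a model’s decision boundary.

```python
import numpy as np

# A toy, self-contained illustration of an adversarial evasion attack in
# the style of the fast gradient sign method (FGSM). The "model" is a
# hypothetical logistic-regression classifier; nothing here comes from
# the NSA guidance itself.

rng = np.random.default_rng(0)

w = rng.normal(size=8)   # hypothetical trained weights
b = 0.1                  # hypothetical bias

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def predict(x: np.ndarray) -> float:
    """Probability the model assigns to the positive class."""
    return sigmoid(w @ x + b)

x = rng.normal(size=8)   # a benign input
p = predict(x)

# Gradient of the score with respect to the input: p * (1 - p) * w.
# The attacker nudges every feature by epsilon in the direction that
# moves the score toward, and usually across, the 0.5 decision boundary.
grad = p * (1.0 - p) * w
epsilon = 1.0
direction = -1.0 if p > 0.5 else 1.0
x_adv = x + direction * epsilon * np.sign(grad)

print(f"benign score:      {p:.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
```

Against a production model the same idea plays out with image pixels or token embeddings rather than toy features, which is why the CSI pushes for active monitoring of model behavior instead of trusting inputs at face value.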

Implementing the Guidelines

The NSA’s guidance emphasizes a proactive, comprehensive approach to AI system security, advocating continuous monitoring and validation of AI systems both before and during deployment. Key recommendations include securing exposed APIs, actively monitoring model behavior, safeguarding model weights, enforcing strict access controls, and conducting regular user training, audits, and penetration testing. The CSI stresses that securing AI systems is an ongoing process: organizations must identify risks, implement appropriate mitigations, and continuously monitor for potential issues. Adhering to these practices can significantly reduce the risks of deploying and operating AI systems.
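One of those recommendations, safeguarding model weights, lends itself to a concrete illustration. The minimal sketch below checks a weights file against a pinned SHA-256 digest before loading it; the file name and the pinning workflow are hypothetical examples of the kind of pre-deployment validation the CSI calls for, not a procedure taken from the document itself.

```python
import hashlib
from pathlib import Path

# A minimal sketch of one control the CSI calls for: validating model
# weights against a known-good digest before they are loaded. The file
# name and workflow here are hypothetical; a real pipeline would pin the
# digest at release time and alert on any mismatch.

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large weight files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def load_weights_if_trusted(path: Path, expected_digest: str) -> bytes:
    actual = sha256_of(path)
    if actual != expected_digest:
        # Refuse to deploy tampered or unexpected weights.
        raise RuntimeError(f"digest mismatch for {path}: {actual}")
    return path.read_bytes()

if __name__ == "__main__":
    # Demo with a stand-in weights file; in practice the expected digest
    # comes from a trusted record made when the model was approved.
    weights_file = Path("model.bin")
    weights_file.write_bytes(b"\x00" * 1024)   # hypothetical weights
    pinned = sha256_of(weights_file)           # pretend this was pinned earlier
    data = load_weights_if_trusted(weights_file, pinned)
    print(f"loaded {len(data)} bytes of verified weights")
```

The pinned digest would normally be stored separately from the artifact, so an attacker who swaps the weights cannot also swap the expected hash.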

Last Updated on November 7, 2024 8:54 pm CET

Source: NSA
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
