Microsoft Security Copilot Brings OpenAI’s GPT-4 to the Cybersecurity Battlefield

Microsoft Security Copilot is a GPT-4-driven service that leverages AI and threat intelligence to give businesses a head start against cybercrime.

Microsoft's AI surge in recent months has seen the company adopt intelligent models across its ecosystem. By leveraging GPT-4 and ChatGPT from OpenAI, the company has brought AI to search through Bing Chat, to the cloud with Azure OpenAI Service, and to Office via Microsoft 365 Copilot. Now it is turning to AI to combat cybercrime with today's launch of Microsoft Security Copilot.

Announced at the inaugural Microsoft Secure event, Microsoft Security Copilot is driven by OpenAI's generative, multimodal GPT-4 model. In an accompanying blog post, the company says the new platform will help organizations meet cybercriminals head-on, giving them a head start instead of constantly playing catch-up with threat actors.

“Today the odds remain stacked against cybersecurity professionals. Too often, they fight an asymmetric battle against prolific, relentless and sophisticated attackers. To protect their organizations, defenders must respond to threats that are often hidden among noise. Compounding this challenge is a global shortage of skilled security professionals, leading to an estimated 3.4 million openings in the field.”

Security Copilot will pick up the slack from the skills shortage while also providing scalable, evolving end-to-end defense at machine speed. Microsoft describes the solution as the first of its kind, combining OpenAI's GPT-4 large language model (LLM) with Microsoft's existing security-specific AI models.

“When Security Copilot receives a prompt from a security professional, it uses the full power of the security-specific model to deploy skills and queries that maximize the value of the latest large language model capabilities. And this is unique to a security use-case. Our cyber-trained model adds a learning system to create and tune new skills. Security Copilot then can help catch what other approaches might miss and augment an analyst's work.”
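
Microsoft's description amounts to a routing pipeline: a natural-language prompt is handed to a security-specific model that selects and runs “skills” (queries against security tooling), and the large language model then summarizes the findings for the analyst. A minimal sketch of that flow, assuming hypothetical objects and method names (select_skills, summarise) that are not the real Security Copilot API, might look like this:

```python
from dataclasses import dataclass


@dataclass
class SkillResult:
    """Output of one security 'skill' (e.g. a threat-hunting query)."""
    skill: str
    findings: list[str]


class SecurityCopilotSketch:
    """Toy illustration only: security model picks skills, skills gather
    findings, the LLM turns findings into step-by-step analyst guidance."""

    def __init__(self, llm, security_model, skills):
        self.llm = llm                        # general-purpose GPT-4-class model (assumed interface)
        self.security_model = security_model  # security-specific model (assumed interface)
        self.skills = skills                  # mapping: skill name -> callable(prompt) -> findings

    def handle_prompt(self, prompt: str) -> str:
        # 1. Let the security-specific model decide which skills apply to this prompt.
        selected = self.security_model.select_skills(prompt, list(self.skills))
        # 2. Run each selected skill against security telemetry to collect findings.
        results = [SkillResult(name, self.skills[name](prompt)) for name in selected]
        # 3. Ask the LLM to summarize the findings as guidance for the analyst.
        return self.llm.summarise(prompt, results)
```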

Microsoft admits that the system is not perfect, simply because AI is still a developing technology that makes mistakes. To overcome this problem, Microsoft Security Copilot uses a closed-loop learning system, meaning it constantly learns from user interactions and welcomes feedback to help it improve.
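
As a rough illustration of what such a closed loop involves, the sketch below, which assumes a simple in-memory store and hypothetical function names rather than anything Microsoft has documented, records analyst feedback on each response and releases it in batches for a later tuning pass:

```python
from collections import deque

# In-memory feedback store; a real system would persist this (assumption for illustration).
feedback_log = deque(maxlen=10_000)


def record_feedback(prompt: str, response: str, helpful: bool, comment: str = "") -> None:
    """Capture an analyst's judgement on one Copilot response."""
    feedback_log.append({
        "prompt": prompt,
        "response": response,
        "helpful": helpful,
        "comment": comment,
    })


def drain_tuning_batch(min_items: int = 100) -> list[dict]:
    """Return accumulated feedback for a tuning pass once enough has been collected."""
    if len(feedback_log) < min_items:
        return []
    batch = list(feedback_log)
    feedback_log.clear()
    return batch
```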

Augmenting Security

It is worth noting that Microsoft is not positioning Security Copilot as a complete replacement of current cybersecurity measures. I am sure that's the ultimate goal, but the AI is nowhere near ready for that sort of responsibility. Instead, Microsoft says the service augments current security professionals with machine scale and speed.

The company points out the system still relies on “human ingenuity” and was built on three guiding principles:

  • “Simplify the complex: In security, minutes count. With Security Copilot, defenders can respond to security incidents within minutes instead of hours or days. Security Copilot delivers critical step-by-step guidance and context through a natural language-based investigation experience that accelerates incident investigation and response. The ability to quickly summarize any process or event and tune reporting to suit a desired audience frees defenders to focus on the most pressing work.
  • Catch what others miss: Attackers hide behind noise and weak signals. Defenders can now discover malicious behavior and threat signals that could otherwise go undetected. Security Copilot surfaces prioritized threats in real time and anticipates a threat actor's next move with continuous reasoning based on Microsoft's global threat intelligence. Security Copilot also comes with skills that represent the expertise of security analysts in areas such as threat hunting, incident response and vulnerability management.
  • Address the talent gap: A security team's capacity will always be limited by the team's size and the natural limits of human attention. Security Copilot boosts your defenders' skills with its ability to answer security-related questions – from the basic to the complex. Security Copilot continually learns from user interactions, adapts to enterprise preferences, and advises defenders on the best course of action to achieve more secure outcomes. It also supports learning for new team members as it exposes them to new skills and approaches as they develop. This enables security teams to do more with less, and to operate with the capabilities of a larger, more mature organization.”

Capabilities

It is not just GPT-4 from OpenAI that defines Microsoft Security Copilot. The service also taps directly into Microsoft's security-specific AI and end-to-end services to deliver the following feature set:

  • “Ongoing access to the most advanced OpenAI models to support the most demanding security tasks and applications
  • A security-specific model that benefits from continuous reinforcement, learning and user feedback to meet the unique needs of security professionals;
  • Visibility and evergreen threat intelligence powered by your organization's security products and the 65 trillion threat signals Microsoft sees every day to ensure that security teams are operating with the latest knowledge of attackers, their tactics, techniques, and procedures;
  • Integration with Microsoft's end-to-end security portfolio for a highly efficient experience that builds on the security signals;
  • A growing list of unique skills and prompts that elevate the expertise of security teams and set the bar higher for what is possible even under limited resources.”

Safety

The purpose of Microsoft Security Copilot is to usher in a new era of AI-driven security. But what about the other direction: what is Microsoft doing to ensure the AI itself is ethical? This is a topical question considering the company laid off its AI ethics team this month and has not really done enough to discuss the potential dangers of the AI it is releasing into the wild.

Firstly, OpenAI has been an anchor for all of Microsoft's AI projects in recent months, and it is an organization that takes AI ethics seriously and has robust safety commitments. Microsoft remains somewhat vague on Security Copilot itself, merely saying it is sticking to its commitment to “impactful and responsible AI practices by innovating responsibly, empowering others, and fostering positive impact.”

The company does point out that users of the service can control how and when their data is used. Furthermore, Microsoft claims that user data is not used to teach the AI model.

Microsoft's AI Surge

In recent months, artificial intelligence (AI) has gone through a mainstream explosion, driven largely by Microsoft and OpenAI, long-term partners through Microsoft's multi-billion-dollar investments. AI, we are often told, will transform our lives and make them fundamentally better. For the most part, Microsoft's embrace of AI in recent months has moved the needle in that direction, but only slightly.

Using AI in cybersecurity is an area where the technology could be quickly transformative. That is why Microsoft Security Copilot is an intriguing use of GPT-4.

Source: Microsoft
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
