Adaptive Security has successfully raised $43 million in funding to accelerate its work combating AI-driven cyber threats, with a specific focus on defending against deepfakes and social engineering attacks.
The round, co-led by OpenAI and Andreessen Horowitz (a16z), reflects the growing urgency for businesses to invest in proactive cybersecurity measures as AI technology becomes more sophisticated and accessible to cybercriminals. This investment comes at a critical time, as generative AI tools are increasingly weaponized in online scams, identity theft, and fraudulent schemes.
The Rise of AI-Enhanced Cybercrime
The emergence of AI technologies like deepfakes has drastically reshaped the cybercrime landscape. Deepfakes—highly convincing AI-generated images, audio, and videos—are being used to impersonate individuals, manipulate public opinion, and deceive victims into fraudulent transactions.
Industry research indicates that deepfake-related cybercrime surged in 2023, accounting for roughly 7% of global fraud cases and representing a tenfold increase in detected incidents over the previous year. These figures highlight how vulnerable both individuals and businesses are to AI-powered fraud.
One of the most concerning aspects of deepfake technology is its ability to enable social engineering attacks. In these attacks, cybercriminals impersonate trusted figures, using convincing AI-generated media to manipulate victims. Adaptive Security’s platform is designed to address this growing threat by helping organizations detect and neutralize these attacks before they cause financial or reputational damage.
OpenAI’s involvement in this funding round is a notable step in addressing the risks posed by AI misuse. By backing Adaptive Security, OpenAI is aligning itself with efforts to safeguard against AI-driven cybercrime, an area that remains a growing concern for tech companies worldwide.
Industry Responses: Meta and Microsoft Address AI Misinformation
Tech giants like Meta and Microsoft are also taking proactive steps to combat the misuse of AI in cybersecurity. Meta’s Video Seal, an open-source watermarking framework, embeds invisible, tamper-resistant identifiers into AI-generated videos to authenticate their origin.
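The underlying idea is easy to sketch. Below is a toy, hypothetical spread-spectrum watermark in Python (numpy only): each message bit is hidden as a faint pseudorandom pattern keyed to a shared seed and later recovered by correlation. It illustrates the concept of invisible, recoverable identifiers; it does not reflect Video Seal's actual design, which is a learned model built to remain detectable through common edits and compression.

```python
import numpy as np

# Toy spread-spectrum watermark, for illustration only -- NOT Meta's Video Seal,
# which uses a learned embedder/extractor. The idea: hide each message bit as a
# faint pseudorandom pattern keyed to a shared seed, recover it by correlation.

RNG_SEED = 42    # shared secret between embedder and detector (assumption)
STRENGTH = 5.0   # embedding strength: higher survives more noise, less invisible

def embed(frame: np.ndarray, bits: list[int]) -> np.ndarray:
    """Add one signed pseudorandom carrier per message bit to a grayscale frame."""
    rng = np.random.default_rng(RNG_SEED)
    marked = frame.astype(np.float64)
    for bit in bits:
        carrier = rng.standard_normal(frame.shape)
        marked += STRENGTH * carrier * (1.0 if bit else -1.0)
    return np.clip(marked, 0, 255).astype(np.uint8)

def extract(frame: np.ndarray, n_bits: int) -> list[int]:
    """Recover bits by correlating the frame against each regenerated carrier."""
    rng = np.random.default_rng(RNG_SEED)
    f = frame.astype(np.float64)
    f -= f.mean()   # remove the DC component so only the hidden carriers correlate
    return [int(np.sum(f * rng.standard_normal(f.shape)) > 0) for _ in range(n_bits)]

frame = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(extract(embed(frame, [1, 0, 1, 1]), 4))   # should recover [1, 0, 1, 1]
```

The STRENGTH constant makes the central tradeoff explicit: a stronger carrier survives more tampering but is less invisible, which is precisely the balance learned watermarking schemes try to optimize automatically.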
This move is part of an industry-wide effort to help consumers distinguish real media from AI-manipulated content. It complements Microsoft's initiatives, which have included legal action against cybercriminals exploiting AI to generate harmful content such as deepfake videos.
In a notable case, Microsoft recently filed a lawsuit against a hacking group that used stolen Azure OpenAI credentials to generate malicious deepfakes. These fake videos were designed to manipulate targets into disclosing sensitive information. This lawsuit underlines the increasing frequency of AI-related cybersecurity threats and the necessity for businesses to adopt robust defenses against such attacks.
Proactive AI Defense Solutions
The integration of AI into cybersecurity platforms is essential to keeping pace with the sophistication of modern threats. Microsoft's Security Copilot platform, for example, recently expanded to include AI agents that automate phishing detection, vulnerability management, and identity protection. These agents are part of a broader trend of companies leveraging AI not only to respond to cyberattacks but also to anticipate and mitigate potential risks.
Similarly, Adaptive Security is using AI to simulate realistic attack scenarios, providing businesses with a testing ground to strengthen their defenses. The company’s platform aims to teach employees how to recognize AI-powered threats before they result in significant losses. This proactive approach is crucial as AI technologies like deepfakes continue to evolve and become more difficult to detect.
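Adaptive Security has not published its internals, but the simulation-plus-training pattern it describes is straightforward to sketch. The hypothetical Python below runs templated social-engineering scenarios (voice clones, deepfake calls, AI-written spear phishing) against an employee roster and records who was fooled, so follow-up training can target the weakest links; every name and scenario here is invented for illustration.

```python
import random
from dataclasses import dataclass, field

# Hypothetical sketch of simulation-based security training -- NOT Adaptive
# Security's actual platform. It runs templated social-engineering scenarios
# against a roster and records outcomes so training can target weak spots.

SCENARIOS = [
    ("voice_clone_ceo", "Urgent wire-transfer request in the CEO's cloned voice"),
    ("deepfake_video_call", "Video call from a 'vendor' asking to change bank details"),
    ("ai_spearphish_email", "Personalized email referencing a real recent project"),
]

@dataclass
class Employee:
    email: str
    failures: list = field(default_factory=list)

def run_campaign(roster: list[Employee], seed: int = 0) -> dict:
    """Send each employee one random scenario; log failures for follow-up training."""
    rng = random.Random(seed)
    results = {"passed": 0, "failed": 0}
    for emp in roster:
        scenario_id, _description = rng.choice(SCENARIOS)
        # A real platform would deliver actual simulated content and observe
        # user behavior; here the outcome is stubbed with a weighted coin flip.
        fell_for_it = rng.random() < 0.3
        if fell_for_it:
            emp.failures.append(scenario_id)
            results["failed"] += 1
        else:
            results["passed"] += 1
    return results

roster = [Employee(f"user{i}@example.com") for i in range(20)]
print(run_campaign(roster))                       # e.g. {'passed': 14, 'failed': 6}
print([e.email for e in roster if e.failures])    # who needs targeted training
```

The per-employee failure log is the point of the exercise: in a production system it would drive tailored training modules rather than a one-size-fits-all awareness course.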
According to Microsoft, integrating AI into its security tools has already produced promising results in uncovering vulnerabilities in foundational software. These discoveries, such as potentially exploitable flaws found in bootloaders, highlight the need for AI-driven tools that can identify risks traditional cybersecurity methods may overlook.
In another AI-focused initiative, NTT DATA and Palo Alto Networks last year introduced MXDR, an AI-enhanced managed detection and response service aimed at protecting high-risk industries. The platform uses AI-based detection to quickly identify patterns that could indicate potential breaches, allowing companies to react promptly.
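Neither vendor has published MXDR's detection logic, but the general technique, flagging telemetry that deviates from a learned baseline, can be shown in a few lines. The sketch below uses scikit-learn's IsolationForest on synthetic login events; the three features (hour of day, failed attempts, megabytes transferred) are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Minimal sketch of AI-based breach-pattern detection -- illustrative only, not
# the NTT DATA / Palo Alto Networks MXDR implementation. Each row is one login
# event: [hour_of_day, failed_attempts, MB_transferred] (assumed feature set).

rng = np.random.default_rng(1)

# Baseline: office-hours logins, rare failures, modest transfer volumes.
normal = np.column_stack([
    rng.normal(13, 2, 500),     # logins clustered around early afternoon
    rng.poisson(0.2, 500),      # failed attempts are rare
    rng.normal(50, 15, 500),    # roughly 50 MB transferred per session
])

# Suspicious: 3 a.m. logins, repeated failures, exfiltration-sized transfers.
suspicious = np.array([[3.0, 7, 900.0], [2.5, 5, 1200.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 flags an anomaly; expect [-1, -1]
print(model.predict(normal[:3]))   # baseline events should mostly score 1
```

An unsupervised detector like this needs no labeled breach data, which is why isolation-style models are a common starting point for spotting novel attack patterns that signature-based tools would miss.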
AI-Driven Cybersecurity: A Global Challenge
As AI tools like deepfakes and voice cloning continue to advance, the risks associated with their misuse are becoming more pervasive. A global surge in AI-driven cybercrime, from Southeast Asia to Europe, has raised alarms about the reach and sophistication of these threats. In Southeast Asia alone, cybercriminals exploited AI technologies to perpetrate fraud that amounted to $37 billion in 2023.
Governments and tech companies are beginning to recognize the urgency of addressing these risks. In 2024, Europol warned that AI-generated content is complicating efforts to combat organized crime, noting the increasing use of AI for child exploitation and financial fraud. This highlights the need for unified action across the tech industry to ensure that AI technologies are used responsibly, while also defending against those who seek to exploit them for malicious purposes.
The Path Forward: Scaling AI Defenses
The $43 million in funding secured by Adaptive Security will help the company scale its AI-powered defense solutions and develop new tools to protect against AI-generated attacks. With OpenAI’s backing, Adaptive Security is positioned to lead the charge in combating these emerging threats. The company’s platform, which simulates deepfake attacks and trains employees to recognize AI-driven threats, is a key part of the future of cybersecurity.
However, as AI technologies continue to advance, so too will the sophistication of cybercriminals who seek to exploit them. To stay ahead of these risks, businesses will need to invest in cutting-edge cybersecurity tools that can adapt to new and emerging threats. The collaboration between OpenAI and Adaptive Security is a step in the right direction, but it will be essential for the industry to continue innovating and working together to address the rapidly evolving threat landscape.
Final Thoughts on the Evolving Threat Landscape
The growing reliance on AI tools, both for malicious purposes and in defense, makes it increasingly difficult for businesses to navigate the complexities of cybersecurity. As companies like Adaptive Security ramp up their efforts to counter these threats, it becomes evident that a multi-layered, AI-driven defense strategy will be essential to safeguard against the new generation of cybercrime.
Despite the promising advances made by companies like Adaptive Security, the road ahead is fraught with challenges. As AI technology evolves, businesses will need to stay agile, adopting new security measures and continuously refining their defenses to keep pace. Collaboration between AI companies and cybersecurity startups will be crucial to developing defenses sophisticated enough not only to respond to attacks but to anticipate them.
For now, the $43 million investment in Adaptive Security is just one of many steps being taken to build a more resilient cybersecurity ecosystem. However, with AI’s potential for both good and harm, it is clear that the battle for digital security is only just beginning. As we look ahead, the key will be ensuring that the same technology driving innovation can also be harnessed effectively to safeguard against the growing wave of AI-driven cybercrime.