Government-backed hacking groups from Iran, China, North Korea, and Russia are increasingly turning to generative AI tools like Google Gemini to refine their cyber operations, according to a new report from Google’s Threat Intelligence Group (GTIG).
The findings indicate that AI has not yet enabled hackers to create fundamentally new attack methods but is enhancing existing tactics by improving efficiency, automation, and scalability.
Google’s analysis highlights how AI-assisted hacking activities range from phishing and malware development to reconnaissance on military targets and network exploitation. “Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities,” the report states.
Iran: AI-Driven Phishing and Cyber Espionage
Among all state-backed cyber actors, Iranian hacking groups have emerged as some of the most aggressive adopters of artificial intelligence, with APT42 leading the charge in AI-assisted cyber operations.
Google indicates that Iranian state-sponsored hackers have been particularly active in leveraging AI tools for spear-phishing campaigns, reconnaissance, and the automation of social engineering tactics.
The findings highlight that Iran has been the heaviest user of Gemini among government-backed hacking groups, surpassing even China in its frequency of AI-assisted cyber activity.
One of the most critical ways AI is enhancing Iran’s cyber operations is through phishing. By harnessing generative AI models, Iranian threat actors can create highly convincing phishing emails in multiple languages, including English, Hebrew, and Farsi.
These emails are designed to impersonate trusted entities such as security officials, diplomats, and high-ranking executives, making them far more difficult to detect compared to traditional phishing attempts.
The sophistication of AI-generated phishing content allows attackers to tailor messages to specific individuals or organizations, increasing the likelihood of a successful breach. AI also lets attackers rapidly generate variations of the same phishing message, helping them evade email filters that rely on recognizing repeated patterns.
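To see why pattern-based filtering struggles against such variants, consider a minimal defensive sketch. The two messages and the 0.8 filter threshold below are illustrative assumptions, not taken from Google's report; the point is only that two texts with identical intent can score as dissimilar once the wording is paraphrased.

```python
from difflib import SequenceMatcher

# Two phishing messages with the same intent but AI-style rewording.
# Both texts and the threshold are illustrative placeholders.
template = "Dear colleague, your mailbox quota is full. Click here to verify your account."
variant = "Hello, we noticed your inbox has reached its storage limit. Please confirm your credentials."

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

score = similarity(template, variant)
print(f"similarity: {score:.2f}")

# A naive filter that blocks near-duplicates of a known template fails
# once the wording is paraphrased, even though the intent is identical.
FILTER_THRESHOLD = 0.8
print("blocked" if score >= FILTER_THRESHOLD else "passes the filter")
```

Real mail filters combine many stronger signals (sender reputation, URL analysis, attachment scanning), but the core weakness sketched here, reliance on recognizing previously seen wording, is exactly what cheap AI-generated variation undermines.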
Beyond phishing, Iranian groups have also employed AI for reconnaissance operations, using it to gather intelligence on Western defense institutions, cybersecurity firms, and government agencies. AI-enhanced reconnaissance allows Iranian hackers to scan publicly available information, extract relevant details, and identify vulnerabilities in their targets' digital infrastructure.
This level of automation streamlines the intelligence-gathering process, enabling attackers to pinpoint high-value targets with greater speed and accuracy. The ability to process large volumes of data also aids in mapping organizational structures, personnel relationships, and security policies, giving hackers a deeper understanding of their targets before launching attacks.
Another area where Iranian hackers have capitalized on AI is misinformation campaigns. Iran has long relied on propaganda and influence operations to further its geopolitical objectives, and AI has made it easier to produce, translate, and distribute false or misleading narratives.
By using AI-generated content, Iranian actors can create fake news articles, manipulated social media posts, and deepfake videos designed to sway public opinion, discredit adversaries, and manipulate narratives.
The report highlights that AI-powered misinformation efforts have primarily targeted Middle Eastern adversaries and Western policymakers, aligning with Iran’s broader strategy of political influence and psychological warfare.
Google underscores that Iranian hackers are particularly invested in using AI to craft realistic, context-aware emails that bypass traditional detection measures. This capability has made AI-assisted phishing campaigns more convincing and difficult to block, raising concerns about the ability of traditional cybersecurity defenses to keep pace with evolving tactics.
“Over 30% of Iranian APT actors’ Gemini use was linked to APT42,” the report states, emphasizing the extent to which AI has become an integral tool in refining Iran’s cyberattack strategies.
One of the most troubling aspects of Iran’s AI usage is its application in vulnerability research. Iranian hackers have been found using AI to automate the process of identifying security flaws, with a particular focus on widely used systems such as Microsoft Exchange, Windows Remote Management (WinRM), and remote access protocols.
AI models assist in analyzing technical documentation, reverse-engineering software components, and predicting potential attack vectors, allowing hackers to identify exploitable weaknesses more efficiently than ever before.
The ability to automate vulnerability research means that Iranian groups can discover and exploit security flaws before patches are developed, increasing the effectiveness of their attacks.
These findings align with prior reports showing that Iran-backed hackers frequently exploit zero-day vulnerabilities—software flaws that are unknown to vendors and, therefore, have no immediate fix.
Exploiting these vulnerabilities allows attackers to gain initial access to corporate and government networks, enabling further intrusion, data exfiltration, or sabotage.
The use of AI in streamlining reconnaissance, phishing, and vulnerability research suggests that Iranian cyber operations are becoming more efficient, targeted, and scalable, reinforcing the urgent need for advanced defensive measures against AI-assisted cyber threats.
China: Reconnaissance and AI-Assisted Network Exploits
Chinese state-sponsored hacking groups have increasingly turned to AI-powered reconnaissance and network exploitation techniques to enhance their cyber espionage capabilities.
According to GTIG, China's AI usage has primarily focused on intelligence gathering rather than destructive cyberattacks, reinforcing its long-term approach to cyber infiltration.
Unlike Iran’s reliance on AI for phishing and social engineering, China’s hackers have deployed AI models to map target networks, analyze intelligence data, and optimize post-compromise operations.
A key aspect of China’s AI-assisted cyber strategy involves scanning and mapping U.S. government and defense networks to identify vulnerabilities that could be exploited for persistent access. AI tools have allowed Chinese hacking groups to automate reconnaissance efforts, accelerating the process of analyzing network structures, identifying weak points, and cataloging exposed endpoints.
These efforts are not limited to network infrastructure alone; attackers have also used AI to process and extract insights from open-source intelligence (OSINT), allowing them to identify intelligence officials, defense personnel, and other high-value targets.
The GTIG report confirms that Chinese hackers have used Google Gemini to assist in researching military and government targets, focusing on gathering organizational intelligence that could facilitate long-term espionage efforts. The hackers sought information on U.S. military personnel, cybersecurity professionals, and intelligence community insiders, demonstrating a clear intent to strengthen China’s counterintelligence operations.
AI-driven OSINT processing has provided these actors with an ability to rapidly scan vast amounts of publicly available data, making it easier to track personnel movements, understand relationships within security agencies, and anticipate potential vulnerabilities in classified operations.
Beyond reconnaissance, AI has played a role in post-compromise exploitation, with Chinese hackers developing scripts designed to escalate privileges, steal credentials, and evade endpoint security measures. Google's report notes that Chinese APT actors used Gemini for reconnaissance, scripting and development, code troubleshooting, and research into how to obtain deeper access to target networks.
AI-generated scripts allow attackers to adapt to security defenses more efficiently, particularly when trying to maintain access to high-value intelligence systems. The ability to use AI for troubleshooting compromised environments means that attackers can adjust their strategies in real time, avoiding detection and refining their techniques based on observed security measures.
One of the more notable incidents detailed in the report is China’s attempt to extract internal details about Google’s own AI infrastructure. Hackers were found to be querying Gemini for system-level information, including details on kernel versions, IP addresses, and network configurations.
Google’s security measures successfully blocked these attempts, but the event underscores China’s interest in understanding the inner workings of Western AI systems. This aligns with broader concerns that China may seek to replicate or counteract Western AI advances by infiltrating commercial AI research and development environments.
Google suggests that China’s cyber strategy prioritizes long-term infiltration over immediate disruption, a finding consistent with previous U.S. intelligence assessments. Instead of launching overt attacks that might provoke retaliation, China appears to be laying the groundwork for sustained access to U.S. critical infrastructure, ensuring that it can leverage cyber intrusions for intelligence collection, economic espionage, and strategic advantage in future geopolitical conflicts.
These findings illustrate how AI is accelerating the pace of cyber espionage, providing Chinese actors with enhanced automation, increased operational flexibility, and deeper intelligence processing capabilities.
While Google has implemented countermeasures to detect and prevent AI misuse, the growing sophistication of AI-assisted reconnaissance and network exploitation presents a continuing challenge for national security and cybersecurity professionals worldwide.
Russia: AI’s Minimal Role in State-Sponsored Hacking
While state-sponsored hacking groups in China, Iran, and North Korea have rapidly adopted artificial intelligence for cyber operations, Russia’s use of AI remains relatively limited in direct hacking activities.
Google’s analysis suggests that Russian state-backed hackers are either avoiding Western AI platforms like Gemini or relying on domestically developed AI models for operational security reasons. Unlike other nations that have leveraged AI for phishing, network infiltration, and vulnerability research, Russia’s engagement with AI appears to be more concentrated on information warfare and digital deception rather than direct cyberattacks.
Google’s report states that Russia has made minimal use of AI-driven tools for technical cyber operations, a stark contrast to the high-volume AI-assisted phishing and reconnaissance observed in Iran and China.
Instead of direct attacks, Russian actors have experimented with AI-generated propaganda, deepfake technology, and automated social media manipulation, particularly in relation to Ukraine, NATO, and broader Western geopolitical narratives.
According to the report, “Russia’s AI activity appears to be disproportionately concentrated on influence operations rather than cyber intrusion techniques.” This suggests that the Kremlin’s cyber strategy remains rooted in traditional malware-based attacks, espionage operations, and its long-established playbook of disinformation campaigns.
Although Russia’s engagement with AI in hacking is lower than expected, Google’s researchers indicate that Russian actors may be enhancing existing cyber operations through malware obfuscation and re-encryption techniques.
This means that while Russia might not be using AI to generate novel cyberattack methods, it is likely deploying AI to make its malware harder to detect. AI-enhanced encryption techniques can allow malware variants to evade traditional security measures by altering code patterns dynamically, an approach that has been observed in advanced persistent threat (APT) campaigns attributed to Russian hacking groups.
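The defensive implication can be seen in a toy sketch: signature engines that match known file hashes lose the trail the moment even a single byte of a sample changes, which is why dynamic re-encoding of malware is effective against them. The byte strings below are arbitrary placeholders standing in for a file, not real malware.

```python
import hashlib

# Arbitrary placeholder bytes standing in for a malicious file.
sample_v1 = b"example payload bytes 1234567890"
sample_v2 = bytearray(sample_v1)
sample_v2[0] ^= 0xFF  # flip a single byte, as a re-encoding pass might

h1 = hashlib.sha256(sample_v1).hexdigest()
h2 = hashlib.sha256(bytes(sample_v2)).hexdigest()

print(h1)
print(h2)
# The two digests share no exploitable structure, so a blocklist of
# known-bad hashes cannot match the altered variant; defenders must
# fall back on behavioral or structural detection instead.
print("hashes match" if h1 == h2 else "hashes differ")
```

This is the basic reason modern endpoint products lean on behavioral analysis and fuzzy or structural hashing rather than exact signatures alone.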
In the realm of AI-powered disinformation, Russia has demonstrated a far more active and sophisticated approach. The GTIG report highlights that Russian influence operations have leveraged AI to generate and amplify propaganda at scale, creating synthetic news articles, AI-generated social media personas, and even deepfake content to manipulate public perception.
“We observed Russian threat actors using AI-generated content to enhance state-backed messaging and expand the reach of their influence campaigns,” the report states, confirming that AI has played a role in automating and refining Russia’s long-standing efforts in digital propaganda.
A central focus of Russia’s AI-enhanced influence operations has been its narratives surrounding Ukraine and NATO, which have been widely disseminated across social media platforms. AI-generated text has been used to support pro-Kremlin narratives, attack Western policies, and distort public perception of military conflicts.
Google also suggests that AI models have helped Russian actors refine their social media engagement strategies, enabling more targeted and adaptive disinformation campaigns that respond to real-time events.
While Russia’s reluctance to integrate AI into direct hacking operations sets it apart from China and Iran, its heavy use of AI for digital influence campaigns highlights an evolving approach to information warfare. Google’s findings indicate that Russia may not be using AI to break into systems—but it is certainly using it to shape global narratives, manipulate public opinion, and sustain its hybrid warfare tactics.
AI in Disinformation and Influence Operations
Beyond hacking, AI is increasingly used for state-backed influence campaigns, automated disinformation, and propaganda creation. According to Google’s report, Iran, China, and Russia have all tested AI for social manipulation efforts. Chinese and Iranian information operations (IO) groups have:
- Used AI-generated content for political messaging in multiple languages.
- Employed AI-powered SEO techniques to amplify fake news visibility.
- Automated persona creation for influencing social media discussions.
As Google’s report states, “IO actors attempted to use Gemini for research, content generation including developing personas and messaging, translation and localization, and to find ways to increase their reach.”
This aligns with previous reports showing that AI-generated fake news and deepfakes are being used to undermine public trust in democratic institutions.
While Russia has been slower to use AI for direct hacking, its influence campaigns make far more advanced use of the technology. The report suggests that:
- Deepfake technology is increasingly deployed in Russian disinformation efforts.
- AI-generated narratives target NATO, Ukraine, and Western governments.
- Social media bot networks are being optimized using AI models.
These findings indicate that AI is already reshaping global influence operations, with propaganda and cybercrime converging into a more automated, AI-driven ecosystem.
Security Measures and Google’s Response
Google has taken proactive steps to mitigate AI misuse, stating that Gemini’s security controls have successfully blocked attempts at AI-powered cyberattacks. However, the growing threat posed by open-source AI models is more difficult to contain. Google has implemented:
- Security controls in Gemini to prevent AI from generating malicious code.
- Account terminations for identified hacking group activities.
- AI misuse tracking via its Secure AI Framework (SAIF).
Yet, as Kent Walker, Google’s chief legal officer, warns, “America holds the lead in the AI race—but our advantage may not last.” This underscores the urgent need for AI security policies that keep pace with adversarial use cases.
As AI continues to evolve, its dual-use nature will create both opportunities and risks. While AI has not yet produced novel hacking techniques, it has increased efficiency, enabling both nation-state and criminal actors to scale their operations.