Cybersecurity researchers at Proofpoint have identified a sophisticated email campaign whose PowerShell loader appears to have been written with the help of artificial intelligence. The campaign, orchestrated by the threat actor TA547, also known as Scully Spider, targeted dozens of organizations across Germany with the Rhadamanthys information stealer. The finding underscores the growing complexity of cyber threats and cybercriminals' willingness to fold AI into their tooling.
The Attack Vector: Impersonation and Sophistication
TA547, an initial access broker known for distributing a wide range of malware, has shifted its focus toward the Rhadamanthys modular stealer, a tool that has been actively distributed to various cybercrime groups since September 2022. In its latest operation, the group impersonated the Metro cash-and-carry brand and sent phishing emails to dozens of German organizations. The emails carried a ZIP archive protected with the password 'MAR26', which contained a malicious shortcut (LNK) file. When opened, the shortcut launched a PowerShell script that decoded the Rhadamanthys executable and ran it directly in memory, a fileless technique that helps avoid detection because the payload is never written to disk.
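Proofpoint did not publish the full loader, but the fileless pattern described above, decoding an embedded Base64 blob and handing it straight to the interpreter rather than saving it as a file, generally looks like the defanged sketch below. The variable names are illustrative placeholders, and the Base64 string decodes to a harmless Write-Host command rather than the actual Rhadamanthys payload.

    # Defanged illustration of in-memory execution: decode an embedded Base64
    # blob and run it without ever writing a file to disk.
    # The placeholder below decodes to the harmless command: Write-Host "hello from memory"
    $encodedPayload = "V3JpdGUtSG9zdCAiaGVsbG8gZnJvbSBtZW1vcnki"
    $decodedCommand = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encodedPayload))
    # Invoke-Expression executes the decoded text as PowerShell code entirely in memory
    Invoke-Expression $decodedCommand

Because the payload only ever exists as decoded content inside the PowerShell process, file-based antivirus scanning has nothing on disk to inspect, which is exactly what makes the technique attractive to loaders like this one.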
The Role of AI in Cyber Attacks
The PowerShell script used in the attack exhibited characteristics suggesting it was generated with the assistance of an artificial intelligence system such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. A specific comment above each component, marked by a pound/hash sign (#), is not typical of scripts written by humans but is common in code produced by generative AI tools. While there is no direct evidence tying the script to an AI model, its similarities in structure and content to AI-generated examples are a strong indication that AI was involved.
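As an illustration of the commenting style Proofpoint flagged, the hypothetical fragment below (not taken from the TA547 script) shows what such machine-style annotation looks like: a complete, descriptive comment above every single statement, a habit rarely seen in hand-written loader scripts.

    # Define the directory where the working files will be stored
    $workingDirectory = Join-Path $env:TEMP "example"
    # Create the directory if it does not already exist
    New-Item -ItemType Directory -Path $workingDirectory -Force | Out-Null
    # Build the full path of the output file inside that directory
    $outputFile = Join-Path $workingDirectory "notes.txt"
    # Write a short placeholder message to the output file
    Set-Content -Path $outputFile -Value "placeholder content"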
Since the release of ChatGPT in late 2022, cybercriminals have increasingly used AI to craft more convincing phishing emails, identify vulnerabilities, and build phishing pages. Nation-state actors from countries such as China, Iran, and Russia have likewise been reported to leverage generative AI to enhance their cyber operations. In response to this abuse, OpenAI has blocked accounts associated with state-sponsored hacking groups, underscoring the dual-use nature of AI technologies in cybersecurity.