
AI-Driven Malware: How Fake Apps and CAPTCHAs Target Windows and macOS Users

A surge in AI-driven malware, including the use of fake CAPTCHAs and counterfeit AI apps, is targeting Windows and macOS systems.


Cybersecurity researchers have flagged a surge in malware distribution tactics involving fake CAPTCHAs and counterfeit AI applications. This new wave of cyberattacks targets both Windows and macOS systems and is closely linked to malicious software strains like Lumma Stealer and AMOS.

These malware types are notorious for their ability to collect sensitive user data, including credentials, session cookies, and cryptocurrency wallet information.

Microsoft’s Digital Defense Report from October notes that over 600 million daily incidents now leverage automation. AI-generated phishing emails are harder to detect and can bypass traditional security measures.

Russian-backed operations have used these tools to target Ukraine, blending traditional espionage with disruptive cyber tactics. Meanwhile, North Korea has expanded into AI-driven ransomware campaigns, and Iran has intensified cyber-influence operations across the Gulf region.

How Fake CAPTCHAs Have Become Malicious Entry Points

CAPTCHAs, once considered a basic tool to verify human interaction and block automated scripts, have been repurposed by attackers as a covert method for malware delivery. Security experts from Kaspersky report that since August 2024, cybercriminals have been embedding fake CAPTCHAs into sites ranging from file-sharing platforms to adult content hubs—areas notorious for lower security standards.

Lumma Stealer (aka LummaC2 Stealer), an information stealer written in the C language, is a primary payload seen in such fake CAPTCHA campaigns. The malware is engineered to search for files containing keywords linked to cryptocurrency wallets, passwords, and other sensitive data, which makes it particularly dangerous for users dealing with financial assets.

It can also access browser storage to collect saved passwords and cookies, allowing attackers to hijack active sessions and access user accounts without triggering alerts.

These counterfeit CAPTCHAs mimic legitimate verification steps but contain hidden scripts that, when triggered, initiate malicious processes. The unsuspecting user is typically asked to copy and paste commands that appear as standard verification steps but are actually obfuscated PowerShell scripts.

Obfuscation With Windows PowerShell

PowerShell, a powerful scripting and automation tool included in Windows, has become a favorite of cybercriminals due to its ability to execute complex scripts directly within the operating system. Attackers often encode these scripts to mask their true purpose.

The scripts are often encoded in formats like Base64, a binary-to-text encoding scheme that transforms binary data into a sequence of printable characters, complicating detection and analysis by traditional security software.
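To illustrate how this layer of obfuscation works in principle, here is a minimal Python sketch of the encode/decode round trip analysts perform. PowerShell’s -EncodedCommand flag expects Base64 over UTF-16LE text; the embedded command below is a harmless placeholder, not an actual payload.

```python
import base64

# Hypothetical blob standing in for the argument attackers pass to
# "powershell -EncodedCommand"; here it merely wraps a harmless command.
encoded = base64.b64encode("Write-Output 'hello'".encode("utf-16-le")).decode()

# -EncodedCommand expects Base64 over UTF-16LE text, so analysts simply
# reverse both steps to recover the underlying script for inspection.
decoded = base64.b64decode(encoded).decode("utf-16-le")
print(decoded)  # -> Write-Output 'hello'
```

In real campaigns, decoding one layer often only reveals another encoded stage, which is part of what frustrates signature-based detection.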

Once executed, the scripts connect to remote command-and-control (C2) servers, downloading additional payloads and enabling more extensive system compromise. In many cases, the downloaded malware is disguised as legitimate software, such as BitLocker To Go, a tool known for encrypting removable storage.

This tactic exploits the built-in trust that users place in recognizable software names, allowing the malicious payload to operate with minimal suspicion. The use of trusted tools like PowerShell also complicates detection, as traditional antivirus programs may not flag these activities as harmful.

The Rise of Fake AI Tools in Malware Campaigns

The growing interest in artificial intelligence has given cybercriminals new avenues for attack. Fake AI-based applications, advertised as cutting-edge tools for tasks like video editing and image generation, are being used as bait.

One high-profile case involves EditProAI, a fraudulent video and image editing tool uncovered by cybersecurity researcher “g0njxa.” Promoted through social media ads featuring deepfake videos, EditProAI appears legitimate and lures users into downloading malware-laden software.
 

The Windows version, labeled “Edit-ProAI-Setup-newest_release.exe,” uses a stolen code-signing certificate, enhancing its credibility. The macOS variant, “EditProAi_v.4.36.dmg,” similarly poses as a genuine tool, bypassing basic security checks on Apple systems.

Pureland, MacStealer and AMOS: Mac-Specific Threats

While Windows-based attacks often receive the most attention, macOS users are not immune to these tactics. Notable Mac-specific threats include Pureland, MacStealer, and Atomic Stealer (AMOS). AMOS is sold via Telegram to cybercriminals for $1,000 per month, offering a full-featured, if not particularly sophisticated, infostealer. Threat actors can manage their campaigns through a web interface rented out by the developer.

The AMOS malware is capable of harvesting credentials, browser cookies, and cryptocurrency wallet information, much like its Windows counterparts. Distributed through counterfeit AI tools and other deceptive applications, AMOS presents a growing threat to macOS users who may falsely assume their systems are less vulnerable to such attacks.

The Spectrum of Malware: Rilide, Vidar, IceRAT, and More

The payloads deployed through these fake CAPTCHAs and AI tools include a variety of data-stealing malware. Notable examples include Rilide Stealer, Vidar Stealer, IceRAT, and Nova Stealer.

Rilide Stealer, for example, is a browser extension that targets Chromium-based browsers like Google Chrome and Microsoft Edge, capturing browsing history and login credentials. It can even bypass two-factor authentication (2FA) by injecting scripts that allow attackers to intercept authentication tokens.

Vidar Stealer focuses on extracting sensitive data, including saved browser credentials and financial information, while IceRAT, more a backdoor than a true remote access Trojan, allows attackers to control compromised systems remotely. These tools are often distributed through Malware-as-a-Service (MaaS) platforms, enabling even less skilled cybercriminals to deploy sophisticated attacks.

Counterfeit ChatGPT Applications and PipeMagic

The appeal of generative AI tools like ChatGPT is also being leveraged by cybercriminals. Researchers at Kaspersky identified PipeMagic, a Trojan that poses as a ChatGPT application and initially targeted organizations primarily in Asia.

By 2024, PipeMagic expanded its reach, attacking enterprises in Saudi Arabia. Built using the Rust programming language, this malware employs legitimate libraries to appear credible while executing hidden, encrypted payloads.

PipeMagic allocates memory dynamically and establishes a communication channel with a C2 server hosted on Microsoft Azure, allowing attackers to download plugins and execute further malicious activities.

This unique approach includes creating named pipes for data transfer, which makes detection more difficult. The Trojan exemplifies how attackers combine obfuscation, memory allocation tricks, and legitimate platforms like Microsoft Azure to carry out advanced attacks.

Malvertising on Social Media Platforms

Malvertising, or the use of online ads for spreading malware, has also become increasingly sophisticated with the integration of AI. Bitdefender researchers reported how attackers hijack Facebook accounts to run sponsored ads impersonating popular AI tools, including Midjourney, DALL-E 3, and ChatGPT.

These campaigns often include AI-generated images and descriptions designed to replicate authentic promotions, convincing users to click on ads that redirect them to malicious download sites or cloud storage services like Dropbox and Google Drive.

A particularly notable case involved a fraudulent Midjourney Facebook page that amassed over 1.2 million followers and reached nearly 500,000 users through its ads before being taken down. The targeted demographic was primarily male users aged 25 to 55, with a significant concentration in European countries such as Germany, France, and Italy.

AI-Powered Malware on the Dark Web: WormGPT and Its Counterparts

The dark web has become a marketplace for AI tools engineered specifically for malicious purposes. One of the most alarming examples, WormGPT, emerged as early as 2023: essentially a large language model chatbot for criminals, built on a GPT-style architecture with six billion parameters at the time.

The AI model is designed for cybercriminal use, automating the creation of phishing emails and malware scripts. Its capabilities make it a highly effective tool for launching targeted cyberattacks. With open-source large language models becoming more powerful, this trend is alarming.

Other AI tools, such as FraudGPT and Evil-GPT, extend these capabilities further. FraudGPT automates the creation of phishing pages and undetectable hacking tools, while Evil-GPT, built entirely in Python, can create malware that collects browser cookies and system data and transmits the information to a server via webhooks. Other tools in this category are WolfGPT, DarkBard, and XXXGPT.
 

PoisonGPT, meanwhile, specializes in creating disinformation campaigns, subtly altering real events to manipulate public perception.

QR Code Phishing on the Rise

August saw a 2,000% increase in phishing campaigns using QR codes, especially through Microsoft Sway or Microsoft Teams. Chats or emails often include innocent-looking QR codes linking users to fake Microsoft domains, giving attackers access to accounts once the codes are scanned.

Netskope Threat Labs documented that these scams often bypass text-based scanners, directing users to credential-stealing sites via QR codes embedded in phishing emails.

Such attacks typically target mobile devices, which lack the robust security measures found on desktops, making them easier to compromise.
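The practical defense on the receiving end is to inspect where a scanned QR code actually points before entering any credentials. The sketch below is a simplified illustration with an assumed allowlist and made-up URLs; it shows the kind of hostname check that security tooling, or a cautious user, can apply once the link has been extracted from the code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would use the organisation's own list.
TRUSTED_HOSTS = ("microsoft.com", "office.com", "sharepoint.com")

def looks_legitimate(url: str) -> bool:
    """Return True only if the URL's hostname is, or is a subdomain of, a trusted host."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == h or host.endswith("." + h) for h in TRUSTED_HOSTS)

# Lookalike domains of the kind used in QR phishing fail this check.
print(looks_legitimate("https://login.microsoft.com/common/oauth2"))  # True
print(looks_legitimate("https://microsoft.login-verify.top/auth"))    # False
```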

GitHub’s Role in Malware Distribution

Malware campaigns are also exploiting GitHub, with more than 29,000 deceptive comments spreading Lumma Stealer by linking to password-protected files posing as legitimate updates.

GitHub is responding by removing many of these malicious posts, but developers are encouraged to verify code before using it. A larger campaign, run by the Stargazer Goblin group, involved over 3,000 fake GitHub accounts and distributed malware like RedLine Stealer and Atlantida Stealer, using compromised WordPress sites as distribution points.
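For developers, one simple safeguard against this kind of repository abuse is to verify any downloaded archive against a checksum published by the legitimate project itself, never one posted in a comment thread. Here is a minimal Python sketch, with a placeholder file name and digest.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a downloaded file in chunks so even large archives fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Both values are placeholders: compare the real digest against the checksum
# published on the project's official release page.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
print(sha256_of("downloaded_release.zip") == expected)
```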

Microsoft’s Strategies: Honeypots and Security Challenges

Microsoft has developed defensive strategies like deploying honeypots on its Azure platform, designed to mislead attackers into interacting with fake corporate accounts. This technique not only wastes the attackers’ resources but also gathers intelligence on their strategies. Microsoft security engineer Ross Bevington detailed this initiative at a cybersecurity event in October, emphasizing its effectiveness in tracking phishing tactics.

However, vulnerabilities persist, particularly in Microsoft’s own software. In August, researchers exposed a flaw in Microsoft 365’s anti-phishing feature, where attackers could hide warnings in emails using CSS modifications. This allowed phishing emails to bypass user alerts. Microsoft has acknowledged the issue but has yet to implement a comprehensive fix.

Defensive Strategies for Users and Organizations

To counter these increasingly sophisticated threats, cybersecurity experts recommend implementing robust endpoint detection and response (EDR) solutions.

These tools can detect early signs of compromise and prevent further malware spread. Additionally, organizations should conduct ongoing employee training focused on recognizing phishing tactics and suspicious prompts.

Regular updates to cybersecurity protocols and software, alongside multi-factor authentication (MFA), add layers of protection that make it more difficult for malware to take hold. Companies should also consider using advanced threat intelligence services to stay ahead of emerging risks. Solutions that provide network-level protection can identify advanced threats early.

Public awareness is equally crucial. Users should be taught to verify the legitimacy of any AI tools they download and to avoid files from unofficial or suspicious sources.

Broader collaboration among tech firms, cybersecurity professionals, and governmental bodies will be essential to creating a unified defense against the growing array of AI-driven malware threats.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
