AI Attacks AI: Open WebUI, a Popular Interface for Self-Hosted AI Chatbots, Breached by AI-Generated Malware

Attackers are using AI-generated malware to exploit misconfigured instances of Open WebUI, a web-based interface for AI models.

Attackers are exploiting misconfigured Open WebUI instances with AI-generated malware to compromise systems. Open WebUI provides a self-hosted, browser-based interface for interacting with large language models served by popular backends such as Ollama and LM Studio.

The sophisticated campaign marks a concerning escalation: AI tools now not only craft malicious payloads but have also become exploitation targets. The attacks affect both Linux and Windows systems and aim to install cryptominers and infostealers using advanced evasion techniques.

This incident highlights a critical new vulnerability: AI interfaces designed for productivity can become significant attack surfaces if not properly secured. Sysdig’s investigation found that attackers gained initial access to an Open WebUI system that was exposed online with administrative rights and no authentication, then uploaded a heavily obfuscated Python script.
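For administrators who want to verify they are not running the same misconfiguration, one quick check is to probe the instance from an unauthenticated client and confirm that API routes refuse to answer. The snippet below is a minimal sketch of that idea; the probed path and expected status codes are assumptions and should be adapted to the Open WebUI version actually deployed.

```python
import sys
import requests

def check_requires_auth(base_url: str, path: str = "/api/models") -> bool:
    """Return True if the endpoint rejects unauthenticated requests.

    NOTE: '/api/models' is an illustrative path, not a guaranteed
    Open WebUI route; adjust it for the version you are auditing.
    """
    resp = requests.get(base_url.rstrip("/") + path, timeout=10)
    # 401/403 indicates authentication is enforced; a 200 with data
    # suggests the instance answers anyone who can reach it.
    return resp.status_code in (401, 403)

if __name__ == "__main__":
    url = sys.argv[1]  # e.g. https://your-openwebui.example.com
    if check_requires_auth(url):
        print("Authentication appears to be enforced.")
    else:
        print("WARNING: endpoint answered without credentials; review exposure.")
```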

The researchers noted stylistic hallmarks of AI generation. A ChatGPT code detector analysis cited by Sysdig concluded that the script “highly likely (~85–90%) is AI-generated or heavily AI-assisted. The meticulous attention to edge cases, balanced cross-platform logic, structured docstring, and uniform formatting point strongly in that direction.” The AI-assisted malware, dubbed “pyklump” by the research team, served as the primary vector for the ensuing attack.

AI’s dual role as both a tool for malware creation and an exploitation target presents a new cybersecurity challenge and highlights the urgent need for stringent security around AI applications and infrastructure. This is especially true as self-hosted AI tools grow in popularity.

Anatomy of an AI-Driven Intrusion

The AI-generated Python script, once executed via Open WebUI Tools, initiated a multi-stage compromise. Sysdig reported that a Discord webhook facilitated command and control (C2) communications, a technique that is a growing trend because it blends easily with legitimate network traffic.
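Because webhook traffic is ordinary HTTPS to discord.com, a practical defensive angle is egress auditing: servers rarely have a legitimate reason to call Discord’s webhook API. The sketch below is a simple, hypothetical log-scanning example, assuming proxy or egress logs that record destination URLs; it is not part of the reported attack chain or of Sysdig’s tooling.

```python
import re
import sys

# Flagging Discord webhook destinations in egress/proxy logs is a cheap
# detection: most servers should never talk to this API at all.
WEBHOOK_PATTERN = re.compile(r"https://discord(?:app)?\.com/api/webhooks/\S+")

def find_webhook_calls(log_path: str):
    """Yield (line_number, url) for every Discord webhook URL seen in the log."""
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            match = WEBHOOK_PATTERN.search(line)
            if match:
                yield lineno, match.group(0)

if __name__ == "__main__":
    for lineno, url in find_webhook_calls(sys.argv[1]):
        print(f"line {lineno}: outbound Discord webhook call -> {url}")
```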

To avoid detection, the attackers leveraged ‘processhider,’ a utility that makes malicious processes such as cryptominers disappear from standard system listings by intercepting and filtering the library calls that tools like ps and top use to enumerate processes. They also used ‘argvhider’ to hide crucial command-line parameters, such as mining pool URLs and wallet addresses; this tool achieves obscurity by overwriting the process’s argument vector in memory so that inspection tools cannot read the original sensitive data.
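Userland hooks of this kind can often be spotted by comparing two different views of the process table. The sketch below, loosely modeled on the approach of tools like unhide, lists PIDs straight from /proc and compares them with what ps reports. It is a simplified illustration rather than Sysdig’s method; a hiding library preloaded system-wide could affect this script as well, which is why production checks typically rely on statically linked tools or direct syscalls.

```python
import os
import subprocess

def pids_from_proc() -> set[int]:
    """Enumerate PIDs by reading /proc directly."""
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def pids_from_ps() -> set[int]:
    """Enumerate PIDs as reported by ps, whose view userland hooks can filter."""
    out = subprocess.run(["ps", "-eo", "pid="], capture_output=True, text=True, check=True)
    return {int(tok) for tok in out.stdout.split()}

if __name__ == "__main__":
    hidden = pids_from_proc() - pids_from_ps()
    # Short-lived processes can appear here briefly; persistent entries
    # that ps never reports deserve a closer look.
    for pid in sorted(hidden):
        try:
            with open(f"/proc/{pid}/comm") as fh:
                name = fh.read().strip()
        except OSError:
            continue  # process exited between the two snapshots
        print(f"PID {pid} ({name}) visible in /proc but not in ps output")
```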

The Windows attack path involved installing the Java Development Kit (JDK) in order to run a malicious JAR (Java Archive) file, application-ref.jar, downloaded from a C2 server. This initial JAR acted as a loader for further malicious components, including INT_D.DAT, a 64-bit Windows DLL (Dynamic Link Library) that featured XOR decoding (a simple payload-obfuscation technique) and sandbox evasion.
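XOR encoding of this kind is trivial to reverse once the key is known, which is why analysts routinely brute-force it during triage. The snippet below is a generic single-byte XOR decoder for illustration only; it is not the routine used in INT_D.DAT, whose key and scheme were not disclosed, and the sample blob is a toy value.

```python
def xor_decode(data: bytes, key: int) -> bytes:
    """Reverse a single-byte XOR encoding (XOR is its own inverse)."""
    return bytes(b ^ key for b in data)

def brute_force_printable(data: bytes) -> list[tuple[int, bytes]]:
    """Try all 256 single-byte keys and keep candidates that decode to mostly
    printable ASCII, a common first pass when triaging an encoded blob."""
    candidates = []
    for key in range(256):
        decoded = xor_decode(data, key)
        printable = sum(32 <= b < 127 for b in decoded)
        if printable / len(decoded) > 0.95:
            candidates.append((key, decoded))
    return candidates

if __name__ == "__main__":
    # Toy sample: a harmless string encoded with key 0x5A for demonstration.
    blob = bytes(b ^ 0x5A for b in b"https://example.invalid/payload")
    for key, decoded in brute_force_printable(blob):
        print(f"key=0x{key:02x}: {decoded!r}")
```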

Another component, INT_J.DAT, was a JAR that contained a further DLL, app_bound_decryptor.dll, alongside various infostealers targeting credentials from Chrome browser extensions and Discord. The app_bound_decryptor.dll itself employed XOR encoding, used named pipes (a mechanism for inter-process communication), and incorporated sandbox-detection features.

Over 17,000 Open WebUI instances are reportedly exposed online, according to Shodan data cited by Sysdig, creating a substantial potential attack surface.

AI’s Expanding Role in Cyber Conflict

This Open WebUI exploitation is a recent example of a broader pattern in which AI is increasingly integrated into cybercriminal operations. As early as October 2024, Microsoft reported facing more than 600 million cyberattacks daily and a surge in AI-driven attacks, noting that defenders “are seeing an increase in how cybercriminals can automate their methods through generative AI.” The company’s Digital Defense Report 2024 also stated that “the volume of attacks is just too great for any one group to handle on their own.”

An increasing number of AI-driven malware campaigns use fake applications and CAPTCHAs to target users, and attackers often turn to dark-web AI tools such as WormGPT and FraudGPT to craft sophisticated phishing emails and malware.

By January 2025, phishing attack success rates had reportedly tripled year over year, a rise largely attributed to AI’s ability to create more convincing and localized lures, according to Netskope’s Cloud and Threat Report. LLMs can produce better-localized and more varied lures, helping attackers evade spam filters and increasing the probability of fooling victims.

The AI Cybersecurity Arms Race

While attackers leverage AI, the cybersecurity industry is concurrently developing AI-powered defenses. Google, for example, launched Sec-Gemini v1 in April 2025, an AI model that assists security professionals with real-time threat detection and analysis. The initiative followed earlier successes such as Google’s Big Sleep AI agent, which in 2024 identified a significant vulnerability in the SQLite database engine; Google announced the discovery and remediation of that flaw before it impacted users.

Other major vendors are also bolstering their AI capabilities. Fortinet expanded its AI security tools last November with new integrations for improved threat detection. In April 2025, Google further solidified its AI security strategy by unveiling its Unified Security platform, which integrates Gemini AI to consolidate threat detection and response using structured reasoning. This contrasts with approaches like Microsoft’s Security Copilot, which focuses more on modular automation.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
