Kaspersky has identified a new trend in cybercrime: the sale of malicious prompts designed to bypass the built-in restrictions of AI applications such as ChatGPT. During 2023, its researchers found 249 such prompts offered for sale on various online platforms. The finding highlights the growing sophistication of cybercriminals in exploiting the capabilities of large language models (LLMs).
Malicious Use of AI Technology
The use of ChatGPT and other LLMs for illicit activities has sparked significant concern. Kaspersky counted more than 3,000 posts in Telegram channels and dark-web forums discussing the illegal use of these AI systems. The conversations point to a shift in the cybercrime landscape: even less skilled individuals, often referred to as “script kiddies,” can now carry out actions that traditionally required more advanced technical knowledge. Because ready-made prompts simplify the process, the barrier to entry into cybercriminal activity has been lowered markedly.
The research also points to a growing trade in stolen ChatGPT credentials and compromised premium accounts, signaling a broader trend of cybercriminals aiming to exploit AI technology.
Security Implications and Response
Attempts to “jailbreak” ChatGPT, or bypass its built-in restrictions, have become increasingly common, with some users actively modifying prompts to elicit forbidden information from the AI. Interestingly, some guardrail circumventions appear unnecessary. Kaspersky’s team asked ChatGPT for a list of endpoints where Swagger specifications or API documentation might be exposed. The model initially denied the request, but when the prompt was repeated, it provided a list along with a warning against misuse.
Information like this can serve both legitimate and illegitimate purposes. Developers could use such a list to audit their own services for accidentally exposed documentation, but the same list gives attackers a ready-made set of paths to probe on other people’s infrastructure.
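To illustrate the legitimate side of that dual use, a developer could run a short script against infrastructure they own or are authorized to test, checking whether any API documentation endpoints are publicly reachable. The sketch below is a minimal Python example; the endpoint paths are common defaults chosen purely for illustration and are not drawn from Kaspersky’s research.

```python
# Minimal sketch: check your own service for publicly exposed API documentation.
# The paths below are illustrative common defaults, not a list from Kaspersky's report.
import urllib.error
import urllib.request

COMMON_DOC_PATHS = [
    "/swagger.json",
    "/swagger-ui.html",
    "/openapi.json",
    "/v2/api-docs",
    "/api-docs",
    "/redoc",
]


def find_exposed_docs(base_url: str, timeout: float = 5.0) -> list[str]:
    """Return documentation URLs on base_url that answer with HTTP 200."""
    exposed = []
    for path in COMMON_DOC_PATHS:
        url = base_url.rstrip("/") + path
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    exposed.append(url)
        except (urllib.error.HTTPError, urllib.error.URLError):
            # Not found, blocked, or unreachable: nothing exposed at this path.
            pass
    return exposed


if __name__ == "__main__":
    # Only scan hosts you own or are explicitly authorized to test.
    for hit in find_exposed_docs("https://example.com"):
        print("Exposed documentation:", hit)
```

A security team might run such a check as part of routine external-surface monitoring, which is exactly the kind of defensive use the dual-purpose nature of this information allows.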
In light of these developments, Kaspersky cautions that, although there is considerable speculation about AI writing polymorphic malware (software that changes its own code to evade detection), no such malware has yet been detected. Nonetheless, the possibility remains a concern for the future. The UK’s National Cyber Security Centre (NCSC) has also raised alarms about the potential role of AI in improving the capabilities of ransomware and state-backed malware.