Researchers Discover Nearly 3,000 Darknet Posts Discussing Chatbots and AI
Experts from Kaspersky Digital Footprint Intelligence have analyzed darknet messages related to the use of ChatGPT and other large language model (LLM)-based solutions. In 2023, they found over 2,890 such posts on underground forums and Telegram channels. The highest number of discussions occurred in April, with 509 messages recorded that month.
General Statistics on AI-Related Discussions
The company’s report states that these discussions can be divided into several main categories:
- Abuse of ChatGPT itself. For example, one post suggested using GPT to generate polymorphic malicious code with specific functionality. By querying a legitimate domain (openai.com) from an infected machine, an attacker could generate and execute malicious code while bypassing some standard security checks. Researchers note that no malware using this method has been found so far, but it could appear in the future.
- Criminal use of ChatGPT. Criminals increasingly rely on ChatGPT for malware development and other illegal purposes. For instance, one forum user described how AI helped them solve a problem they encountered while processing a dump of user data.
- Jailbreaks. To coax “forbidden” answers out of LLMs, hackers use various jailbreaks: special sets of prompts that bypass the built-in restrictions of ChatGPT and similar services. In 2023, researchers found 249 offers to distribute or sell such prompt sets. Jailbreaks are not inherently malicious, however, and can also be used to unlock legitimate functionality in these services.
- Malware. Participants in hacking forums actively exchange ideas on using AI to enhance malware and increase the overall effectiveness of cyberattacks. For example, one post described software for malware operators that used AI to protect the operator and automatically switch cover domains.
- Open-source and pentesting tools. Open-source developers are experimenting with various ChatGPT-based solutions, including in projects for cybersecurity professionals, and hackers are paying attention to these developments. For example, GitHub hosts an open-source utility that uses a generative model to obfuscate PowerShell code so that it evades detection by monitoring and security systems. Such tools can serve both pentesters and criminals, and researchers have observed discussions about using them for malicious purposes.
- “Evil” ChatGPT alternatives. The popularity of chatbots has sparked interest in projects aimed squarely at cybercriminals, such as WormGPT, XXXGPT, and FraudGPT. These are marketed as replacements for ChatGPT that lack the original’s restrictions and add extra features (such as tools for phishing campaigns and business email compromise). However, the attention these tools attract can backfire on their creators: WormGPT was shut down in August 2023 amid widespread concern about the threat it posed, and many of the sites and ads offering access to WormGPT that Kaspersky experts discovered turned out to be scams or phishing pages.
- Account sales. Selling paid ChatGPT accounts stolen from real users and companies is another popular darknet topic. Alongside hacked accounts, there is a market for automatically created free accounts: criminals automate registration on the platform using fake or temporary data. Because these accounts are subject to API request limits, they are sold in bulk, letting hackers switch to a fresh account as soon as the previous one stops working.
“There are concerns that the development of large language models makes it easier for criminals to operate and lowers the barrier to entry into the industry. However, at this time, we have not seen real incidents involving such solutions. Nevertheless, the technology is advancing rapidly, and it is likely that language models will soon enable complex attacks, so it is important to monitor this area closely,” commented Alisa Kulishenko, analyst at Kaspersky Digital Footprint Intelligence.