OpenAI Reveals How Hackers Are Exploiting ChatGPT
OpenAI has reported disrupting more than 20 malicious operations and deceptive networks around the world in which ChatGPT was used for debugging and developing malware, spreading disinformation, evading detection, and conducting phishing attacks. Cybersecurity experts had already warned that hackers are actively using AI to create malware. Earlier this year, for example, Proofpoint researchers observed attackers distributing the Rhadamanthys infostealer with a PowerShell script that appeared to have been written by AI, and last month HP Wolf Security analysts found that AsyncRAT malware was being spread using malicious code that had clearly been created with AI.
In its report, OpenAI confirmed the abuse of ChatGPT’s capabilities and described specific cases where Chinese and Iranian hackers misused the chatbot to increase the effectiveness of their operations.
Case 1: The Chinese Group SweetSpecter
One of the groups highlighted by OpenAI is the Chinese group SweetSpecter, first discovered by Cisco Talos analysts in November 2023. At that time, experts reported that this group was engaged in cyber-espionage, mainly targeting Asian governments.
According to OpenAI, SweetSpecter attacks its targets with phishing emails carrying malicious ZIP archives disguised as support requests. When such an attachment is opened, an infection chain is triggered that ultimately deploys the SugarGh0st RAT on the victim’s system.
These attacks even targeted the personal email addresses of OpenAI employees. During its investigation, the company found that SweetSpecter was using a cluster of ChatGPT accounts for scripting and vulnerability research.
SweetSpecter members asked the chatbot for:
- Information about vulnerabilities in various applications
- Which specific versions of Log4j are vulnerable to the critical Log4Shell RCE flaw
- Lists of popular CMS platforms used abroad
- Details on specific CVE identifiers
- How internet-wide scanners are built
- Instructions on using sqlmap to upload a potential web shell to a target server
- Help exploiting infrastructure belonging to an unnamed car manufacturer
- Assistance with code for sending bulk text messages via communication services
- Debugging an extension for an unnamed security tool
- Debugging code that is part of a larger framework for sending text messages to specified numbers
- Topics that might interest government employees, and suggestions for email attachment names that would avoid being blocked
- Examples of fake job offer messages, including their own drafts
Case 2: The Iranian Group CyberAv3ngers
The second case in the report involves the Iranian hacking group CyberAv3ngers, which typically targets industrial control systems in Western countries.
According to OpenAI, accounts linked to this group asked ChatGPT for default credentials used in widely deployed programmable logic controllers (PLCs), help developing custom bash and Python scripts, and code obfuscation techniques.
Additionally, Iranian hackers used ChatGPT for post-exploitation planning, researching ways to exploit specific vulnerabilities, and methods for stealing user passwords from macOS.
CyberAv3ngers members asked the chatbot for:
- Lists of widely used industrial routers in Jordan
- Lists of industrial protocols and ports that can be used to connect to the internet
- The default password for Tridium Niagara devices
- The default username and password for Hirschmann RS series industrial routers
- Information on recently disclosed vulnerabilities in CrushFTP and the Cisco Integrated Management Controller, as well as older vulnerabilities in the Asterisk VoIP software
- Lists of power companies, contractors, and popular PLCs in Jordan
- Why a specific bash code snippet returns an error
- How to create a Modbus TCP/IP client
- How to scan a network for exploitable vulnerabilities
- How to scan ZIP files for vulnerabilities
- Sample C code for implementing process hollowing techniques
- How to obfuscate a VBA script in Excel
- Advice on code obfuscation techniques (providing their own code)
- How to copy the SAM file
- Alternatives to mimikatz
- How to use pwdump to export passwords
- How to access user passwords in macOS
Case 3: The Iranian Group Storm-0817
The third case concerns another Iranian group, Storm-0817. This group used ChatGPT to debug malware, create an Instagram scraper, translate LinkedIn profiles into Persian, and develop its own Android malware along with the supporting command-and-control infrastructure.
Storm-0817 members asked the chatbot for:
- Help debugging and implementing an Instagram scraper
- Translating LinkedIn profiles of Pakistani cybersecurity specialists into Persian
- Assistance debugging and developing Android malware and the supporting infrastructure
As a result, the malware created with the help of OpenAI’s chatbot could steal contact lists, call logs, and files stored on the device, take screenshots, inspect browsing history, and determine the user’s precise location.
“At the same time, Storm-0817 used ChatGPT to develop server-side code needed to handle connections from compromised devices,” the OpenAI report states. “This allowed us to see that the command server for this malware was a WAMP (Windows, Apache, MySQL & PHP/Perl/Python) installation, which used the domain stickhero[.]pro during testing.”
All accounts used by the attackers have already been blocked, and related indicators of compromise, including IP addresses, have been shared with OpenAI’s cybersecurity partners.