GhostGPT: How a Telegram Bot Turns Novices into Criminal AI Pros

In 2023, the world saw the emergence of the first generative AI models built for criminal use. One of the best known was WormGPT, which demonstrated that it could help hackers write malicious software. It was soon followed by WolfGPT and EscapeGPT, and cybersecurity researchers have now discovered a new tool in the same vein: GhostGPT.

According to experts at Abnormal Security, GhostGPT is most likely a wrapper around a jailbroken version of OpenAI’s ChatGPT or a similar open-source language model, with all ethical restrictions stripped away.

“By removing its built-in safety mechanisms, GhostGPT provides direct, unfiltered responses to dangerous requests that traditional AI systems would block or flag,” the company wrote in a blog post on January 23.

Key Features of GhostGPT

The developers of GhostGPT actively promote it as a tool with four main features:

  • No censorship
  • High data processing speed
  • No logging, which helps users avoid leaving an evidence trail
  • Ease of use

The tool is available directly through a Telegram bot, making it especially attractive to cybercriminals. GhostGPT is widely advertised on hacker forums and is primarily aimed at enabling business email compromise (BEC) attacks.
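GhostGPT’s internals are not public, so the following is only a minimal sketch of the general pattern the researchers describe: a Telegram bot that statelessly relays each message to a language-model backend and writes nothing to disk. The python-telegram-bot library, the localhost endpoint, and the query_llm helper are all assumptions made for illustration, not details of the actual service.

```python
# Minimal sketch of the wrapper pattern described above (assumed details):
# a Telegram bot that relays each message to an LLM backend and keeps no logs.
import requests
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

LLM_ENDPOINT = "http://localhost:8000/generate"  # hypothetical model backend

def query_llm(prompt: str) -> str:
    # Forward the prompt to the backend; nothing is persisted on this side,
    # which is how a "no logging" claim can be satisfied architecturally.
    resp = requests.post(LLM_ENDPOINT, json={"prompt": prompt}, timeout=60)
    return resp.json()["text"]

async def relay(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Stateless relay: read the incoming message, return the model's reply.
    await update.message.reply_text(query_llm(update.message.text))

app = ApplicationBuilder().token("BOT_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, relay))
app.run_polling()
```

The point of the sketch is that none of the plumbing is exotic: the criminal value of a service like this lies entirely in the unrestricted model behind the endpoint, not in the bot itself.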

Real-World Testing and Capabilities

Researchers at Abnormal Security tested GhostGPT’s capabilities by asking it to draft a phishing email impersonating DocuSign. The bot produced a highly convincing template, confirming the tool’s ability to deceive potential victims.

In addition to generating phishing emails, GhostGPT can be used to write malware and develop exploits.

Lowering the Barrier to Cybercrime

One of the key threats posed by this tool is that it lowers the barrier to entry for cybercrime. Thanks to generative AI, fraudulent emails are becoming more polished and harder to detect, which is especially valuable to attackers whose native language is not English.
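As a hypothetical illustration of why this matters for detection (not part of the Abnormal Security research): many legacy phishing filters leaned on crude language-quality signals, and fluent AI-generated text simply never triggers them. The tell-word list below is invented for the example.

```python
# Hypothetical illustration: a naive filter keyed on the broken English
# that marked older phishing campaigns. The tell list is invented.
COMMON_TELLS = {"kindly do the needful", "dear costumer", "verifcation", "acount"}

def crude_language_score(email_body: str) -> int:
    """Count 'broken English' indicators; fluent AI-written mail scores zero."""
    text = email_body.lower()
    return sum(text.count(tell) for tell in COMMON_TELLS)

legacy_phish = "Dear costumer, kindly do the needful for acount verifcation."
ai_phish = "Hi Jordan, your DocuSign agreement is ready. Please review and sign."

print(crude_language_score(legacy_phish))  # 4 -> flagged by the naive filter
print(crude_language_score(ai_phish))      # 0 -> sails straight past it
```

Fluent, context-aware output removes exactly these surface signals, which is why detection increasingly has to rely on behavioral and infrastructure indicators rather than the text alone.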

GhostGPT also offers convenience and speed: users don’t need to jailbreak ChatGPT or stand up open-source models themselves. For a fixed fee, they get access and can immediately focus on carrying out attacks.
