From simple tasks like creating food shopping lists to more complex ones, such as writing essays and starting an Etsy business, ChatGPT has something for everybody, including criminals. Given its popularity and its ability to compose emails and code, it was only a matter of time before cybercriminals started using it for malicious purposes.
While OpenAI has built certain safeguards into ChatGPT, which will refuse to help anyone attempting to write obviously malicious code, there are now malicious AI chatbots that will happily assist.
Two that have emerged recently are WormGPT and FraudGPT.
What is WormGPT?
Described as “ChatGPT’s malicious cousin”, WormGPT is an AI module based on the GPT-J language model, which was developed in early 2021. It was discovered by researchers from cybersecurity firm SlashNext in July, after they gained access to the tool through a prominent online forum on the dark web.
According to Daniel Kelley, a reformed black-hat computer hacker who works with the firm to identify the latest threats and tactics: “WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data […] and presents itself as a blackhat alternative to GPT models.”
It is designed to produce human-like text that cybercriminals can use to carry out business email compromise (BEC) attacks. BEC is a type of phishing attack in which a cybercriminal hacks or spoofs email accounts to impersonate a company’s CEO or other senior executives, then emails employees asking them to make a purchase or send money via wire transfer.
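To make the pattern concrete, the short Python sketch below flags two classic BEC tells on the receiving side: a sender display name that matches a known executive while the address is external, and a Reply-To header that diverges from the visible sender. The company domain, the executive roster and the bec_red_flags helper are hypothetical placeholders; real mail filtering is considerably more involved.

```python
# Minimal, illustrative BEC red-flag checker. The domain, executive names
# and sample message are hypothetical; this is a sketch, not a mail filter.
from email import message_from_string
from email.utils import parseaddr

COMPANY_DOMAIN = "example.com"                 # assumed internal domain
EXECUTIVE_NAMES = {"jane doe", "john smith"}   # assumed executive roster


def bec_red_flags(raw_message: str) -> list[str]:
    """Return common BEC warning signs found in a raw email message."""
    msg = message_from_string(raw_message)
    flags = []

    from_name, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))

    # Display name impersonates an executive while the address is external.
    if from_name.lower() in EXECUTIVE_NAMES and not from_addr.endswith(
        "@" + COMPANY_DOMAIN
    ):
        flags.append(
            f"executive name '{from_name}' used with external address {from_addr}"
        )

    # Replies are silently redirected away from the visible sender.
    if reply_addr and reply_addr.lower() != from_addr.lower():
        flags.append(f"Reply-To ({reply_addr}) differs from From ({from_addr})")

    return flags


if __name__ == "__main__":
    sample = (
        "From: Jane Doe <jane.doe@freemail.example>\n"
        "Reply-To: payments@lookalike.example\n"
        "Subject: Urgent wire transfer\n"
        "\n"
        "Please process this payment today and keep it confidential."
    )
    for flag in bec_red_flags(sample):
        print("WARNING:", flag)
```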
With the tool already proving successful, its creators unveiled version two last month. The latest release comes packed with new features and enhanced capabilities, such as unlimited characters, code formatting and conversation saving.
What is FraudGPT?
Like WormGPT, FraudGPT is an AI-driven hacker tool sold on the dark web and Telegram that helps create content to facilitate cyberattacks. Sold on a subscription basis, it was spotted by the Netenrich threat research team this summer. At a cost of $200 per month (or $1,700 per year), any subscriber can:
- Write phishing emails and social engineering content.
- Create exploits, malware and hacking tools.
- Discover vulnerabilities, compromised credentials and the best sites to use stolen card details.
- Access advice on hacking techniques and cybercrime.
Why should you and the industry be concerned?
The creation and proliferation of ChatGPT’s ‘evil cousins’ lowers the barrier to entry and democratises the execution of different types of attacks, even sophisticated ones like BEC. Any attacker, even one with limited skills, can use this technology. Just as ChatGPT made AI widely accessible, these tools are opening up phishing and other attacks to anyone who is interested.
According to Daniel Kelley: “As the more public GPT tools are tuned to better protect themselves against unethical use, the bad guys will create their own. The evil counterparts will not have those ethical boundaries to contend with.”
This is exactly what’s happening. In the last two months alone, three more of these tools have been discovered (Evil-GPT, XXXGPT and Wolf GPT), and there will likely be more by the end of the year.
As businesses start planning their budgets for 2024, they must keep in mind that AI tools are a double-edged sword: although they can help ease the pressure IT teams face due to the skills shortage, they can also be used to exploit vulnerabilities. At a minimum, businesses should provide their employees with proper security awareness training and implement strong email authentication protocols such as SPF and DMARC.
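As an illustration of that last recommendation, the sketch below uses DNS to check whether a domain publishes SPF and DMARC records, two of the standard email authentication protocols that make spoofed BEC mail easier to reject. It assumes the third-party dnspython package, and example.com is a placeholder for your own domain.

```python
# Quick audit of a domain's SPF and DMARC posture via DNS TXT lookups.
# Requires dnspython (pip install dnspython); "example.com" is a placeholder.
import dns.resolver


def get_txt_records(name: str) -> list[str]:
    """Return all TXT records published for a DNS name."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    # A TXT record may be split into multiple quoted chunks; rejoin them.
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_email_auth(domain: str) -> None:
    """Print whether a domain publishes SPF and DMARC records."""
    spf = next(
        (r for r in get_txt_records(domain) if r.startswith("v=spf1")), None
    )
    dmarc = next(
        (r for r in get_txt_records(f"_dmarc.{domain}")
         if r.startswith("v=DMARC1")),
        None,
    )

    print(f"SPF:   {spf or 'MISSING: receivers cannot verify your sending hosts'}")
    print(f"DMARC: {dmarc or 'MISSING: spoofed mail from this domain may be accepted'}")


if __name__ == "__main__":
    check_email_auth("example.com")
```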