Unleashing WormGPT: The Dark Side of AI Emerges with Malicious Chatbot

In a concerning turn of events, a hacker has recently unveiled a sinister creation in the form of WormGPT, a malicious counterpart to ChatGPT. Reports indicate that the developer behind WormGPT is offering access to the program on a notorious hacking forum, raising alarm bells within the cybersecurity community. According to email security provider SlashNext, which conducted an assessment of the chatbot, cybercriminals are now leveraging similar AI models to facilitate their nefarious activities more easily.

Unlike its predecessors, such as ChatGPT and Google’s Bard, WormGPT lacks protective mechanisms to prevent it from responding to malicious commands. In a blog post, SlashNext highlighted the concerning trend, stating, “We have observed that malicious actors are now crafting custom modules akin to ChatGPT, but with simplified functionality for illicit purposes.”

The hacker initially introduced this chatbot project back in March, eventually launching it just last month. The developer’s stated aim was to create an alternative to ChatGPT that enables users to engage in various illegal activities and conveniently sell their exploits online. As the developer stated, “WormGPT offers a gateway to the world of blackhat activities, allowing individuals to partake in malicious endeavors from the comfort of their own homes.”

The developer further demonstrated the capabilities of WormGPT by sharing screenshots. These images showcased the chatbot’s ability to generate Python-based malware and provide insights into crafting malicious attacks. The creation process involved utilizing an older open-source large language model called GPT-J from 2021. The developer then trained this model on datasets related to malware creation, resulting in the birth of WormGPT.

SlashNext decided to put WormGPT to the test by evaluating its capacity to compose a persuasive email for a business email compromise (BEC) scheme, a prevalent form of phishing attack. The results were nothing short of unsettling. WormGPT successfully generated an email that not only exhibited remarkable persuasiveness but also revealed strategic cunning, underscoring its potential for sophisticated phishing and BEC attacks. Notably, the chatbot crafted the email flawlessly, devoid of any spelling or grammar errors that typically raise red flags in phishing attempts.

Summarizing its findings, SlashNext warned, “WormGPT is akin to ChatGPT but without ethical boundaries or limitations.” This experiment serves as a stark reminder of the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.

Thankfully, access to WormGPT comes at a steep price: the developer is selling monthly subscriptions for 60 euros and annual subscriptions for 550 euros. At least one dissatisfied buyer has expressed disappointment with the program’s performance, claiming it is “not worth any dime.” Nevertheless, the emergence of WormGPT is a foreboding indication of how generative AI programs could fuel cybercrime as these technologies continue to advance.