
WormGPT: What to know about ChatGPT's malicious cousin

Should we be concerned about the cybercriminal's answer to ChatGPT? Here's everything you need to know.
Written by Charlie Osborne, Contributing Writer
Image: cihatatceken/Getty Images

When ChatGPT was made available to the public in 2022, the AI chatbot took the world by storm. 

The software was developed by OpenAI, an AI research company. ChatGPT is a natural language processing tool that can answer queries and provide information drawn from its training data. It has become a valued tool for on-the-fly information gathering, analysis, and writing tasks for millions of users worldwide. 
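
For readers curious about what this looks like in code, below is a minimal sketch of querying an OpenAI chat model through the company's official Python package. The model name and prompt are illustrative only; the ChatGPT web interface itself requires no code.

    # Minimal sketch: sending a query to an OpenAI chat model via the
    # official Python SDK. Requires an API key in the OPENAI_API_KEY
    # environment variable; the model name and prompt are examples.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=[
            {"role": "user", "content": "Summarize what a large language model is."}
        ],
    )
    print(response.choices[0].message.content)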

The first publicly available version of ChatGPT had a significant limitation: It could not access the internet. However, paying subscribers can now use browser plugins that let the chatbot access the web in real time, and it will provide links to the sources used in its responses. 

While some experts believe the technology could prove as disruptive as the internet itself, others note that ChatGPT demonstrates 'confident inaccuracy.' Students have been caught in droves plagiarizing coursework with the tool. Unless training datasets are verified, ChatGPT and other large language models (LLMs) could become unwitting spreaders of misinformation and propaganda. 

Also: The best AI chatbots

Indeed, now that the chatbot can access the web, it will be interesting to see the long-term impact on the validity of its results and responses. 

Regulatory concerns also exist, with the US Federal Trade Commission (FTC) investigating OpenAI's handling of personal information and the data used to create its language model. 

Also: Generative AI will far surpass what ChatGPT can do. Here's everything on how the tech advances

Beyond data protection concerns, however, every new technological innovation opens new pathways for abuse. It was only a matter of time before AI chatbots and LLMs were weaponized for malicious purposes -- and tools known as WormGPT and FraudGPT are already on the market.

What is WormGPT?

On July 13, 2023, researchers from cybersecurity firm SlashNext published a blog post revealing the discovery of WormGPT, a tool being promoted for sale on a hacker forum.

According to the forum user, the WormGPT project aims to be a blackhat "alternative" to ChatGPT, "one that lets you do all sorts of illegal stuff and easily sell it online in the future."

Also: How to find and remove spyware from your phone

SlashNext gained access to the tool, which is described as an AI module based on the GPT-J language model. WormGPT has allegedly been trained on a range of data sources, including malware-related information -- but the specific datasets remain known only to WormGPT's author. 
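
GPT-J is a freely available, open-source model released by EleutherAI, which is part of what makes projects like this feasible. For reference only -- WormGPT's actual training data and modifications are not public -- here is a minimal sketch of loading stock GPT-J with the Hugging Face transformers library:

    # Minimal sketch: loading the open-source GPT-J model that WormGPT
    # is reportedly built on. Reference only -- WormGPT's training data
    # and modifications are not public. Requires transformers and torch,
    # plus roughly 24GB of memory for the full-precision weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

    prompt = "Write a short, polite payment reminder email."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=80)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))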

WormGPT may, for example, be able to generate malicious code or convincing phishing emails. 

What is WormGPT being used for?

WormGPT is described as "similar to ChatGPT but has no ethical boundaries or limitations."

ChatGPT has a set of rules in place to try to stop users from abusing the chatbot. These include refusing to complete tasks related to criminality and malware. However, users are constantly finding ways to circumvent these limitations with the right prompts.
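
On the legitimate side, providers enforce such rules partly by screening inputs and outputs. As an illustration, here is a minimal sketch that checks a prompt with OpenAI's moderation endpoint before it reaches a model; ChatGPT's own internal safeguards are not public, so this shows the general technique rather than the product's actual implementation.

    # Minimal sketch: screening a user prompt with OpenAI's moderation
    # endpoint. Illustrative of the general technique only; ChatGPT's
    # internal safeguards are not public. Requires OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()

    def is_allowed(prompt: str) -> bool:
        """Return False if the moderation endpoint flags the prompt."""
        result = client.moderations.create(input=prompt)
        return not result.results[0].flagged

    if is_allowed("Draft a friendly meeting reminder email."):
        print("Prompt passed moderation; forwarding to the model.")
    else:
        print("Prompt flagged; refusing the request.")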

Also: The ethics of generative AI: How we can harness this powerful technology

The researchers were able to use WormGPT to "generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice." The team was surprised at how well the language model managed the task, branding the result "remarkably persuasive [and] also strategically cunning."

While the researchers didn't say whether they tried the malware-writing service, it is plausible that the AI bot could produce malicious code, given that ChatGPT's restrictions are absent. 

What does the future hold for WormGPT?

According to security expert Brian Krebs, WormGPT is the creation of a Portuguese developer, "Last," who is in his twenties. The system is -- or was -- being used by approximately 200 customers and, owing to media interest in the chatbot, Last appeared to be trying to move the project in a new direction. 

Also: Cybersecurity 101: Everything on how to protect your privacy and stay safe online

Last said he wanted to shift the project's focus from malicious to uncensored, and to that end, constraints are being added to WormGPT. Last told Krebs:

"Anything related to murders, drug traffic, kidnapping, child porn, ransomwares, financial crime. We are working on blocking BEC too, at the moment it is still possible but most of the times it will be incomplete because we already added some limitations. Our plan is to have WormGPT marked as an uncensored AI, not blackhat."

Also: This AI-generated crypto invoice scam almost got me, and I'm a security pro

However, a day after the report was published, the Telegram developer channel for WormGPT was shut down. The project may still be ongoing, or it may be rebranded; time will tell. The developer said:

"At the end of the day, WormGPT is nothing more than an unrestricted ChatGPT. Anyone on the internet can employ a well-known jailbreak technique and achieve the same, if not better, results by using jailbroken versions of ChatGPT." 

What about FraudGPT?

According to Trustwave research, an alternative malicious LLM appeared on the market in July 2023. Known as FraudGPT, the service has been advertised by its developer on Telegram and the dark web as an "unrestricted alternative for ChatGPT."

Also: 9 top mobile security threats and how you can avoid them

The developer claims that FraudGPT is great for learning how to hack, for writing malware and malicious code, for creating phishing content, and for finding vulnerabilities. 

It's not possible to verify the developer's claims concerning the chatbot, which is being offered on a subscription basis. 

Are WormGPT and FraudGPT the same as ChatGPT?

No. ChatGPT has been developed by OpenAI, a legitimate and respected organization. 

WormGPT and FraudGPT are not OpenAI's creations. Instead, they are examples of how cybercriminals can take inspiration from advanced AI chatbots to develop their own malicious tools using LLMs. 

Will we see more tools like WormGPT and FraudGPT in the future?

Even in the hands of novices and typical scammers, natural language models could turn basic, easily avoided phishing and business email compromise (BEC) scams into sophisticated operations that are far more likely to succeed. There's no doubt that where money is to be made, cybercriminals will pursue it -- and tools like WormGPT are only the start of a new range of products set to be traded in underground markets. 

Also: 6 skills you need to become an AI prompt engineer

As researchers have found, some threat actors are attempting to cash in on malicious LLMs by offering subscription models. Prices range from $100 per month to thousands of dollars for private setups. 

We should also consider that researchers have demonstrated how to bypass safety guardrails in ChatGPT, Bard, and Claude. Even if malicious LLMs don't go beyond a few models on the underground market, cybercriminals may still use legitimate services for their own ends. 

What do regulators say about the abuse of AI tools?

  • Europol: Europol said in the 2023 report, "The impact of Large Language Models on Law Enforcement," that "it will be crucial to monitor [...] development, as dark LLMs (large language models) trained to facilitate harmful output may become a key criminal business model of the future. This poses a new challenge for law enforcement, whereby it will become easier than ever for malicious actors to perpetrate criminal activities with no necessary prior knowledge."
  • Federal Trade Commission: The FTC is investigating ChatGPT maker OpenAI over data usage policies and inaccuracy.
  • UK National Crime Agency (NCA): The NCA warns that AI could fuel an explosion in the risks facing young people, including abuse.
  • UK Information Commissioner's Office (ICO): The ICO has reminded organizations that their AI tools are still bound by existing data protection laws.

Can ChatGPT be used for illegal purposes?

Image: a ChatGPT phishing email prompt (Screenshot by Charlie Osborne/ZDNET)

Not without covert tactics. But with the right prompts, many natural language models can be persuaded to perform particular actions and tasks. 

Also: 6 simple cybersecurity rules you can apply now

ChatGPT, for example, can draft professional emails, cover letters, resumes, purchase orders, and more. That alone removes some of the most common indicators of a phishing email -- spelling mistakes, grammar issues, and second-language errors -- and could cause a headache for businesses attempting to detect suspicious messages and train their staff to recognize them. 
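
To make that concrete, here is a toy sketch of a naive spelling-based heuristic of the kind that LLM-polished text defeats. The vocabulary and sample emails are hypothetical, and real mail filters are far more sophisticated; the point is simply that error-free text scores as clean.

    # Toy sketch: scoring an email by its fraction of misspelled words.
    # Hypothetical vocabulary and examples; real filters are far more
    # sophisticated. LLM-generated text has few spelling errors, so it
    # sails past checks like this one.
    KNOWN_WORDS = {
        "dear", "customer", "please", "verify", "your", "account",
        "the", "attached", "invoice", "is", "overdue",
    }

    def misspelling_score(email_body: str) -> float:
        """Fraction of words missing from the reference vocabulary."""
        words = [w.strip(".,!?").lower() for w in email_body.split()]
        if not words:
            return 0.0
        return sum(1 for w in words if w not in KNOWN_WORDS) / len(words)

    clumsy = "Dear customer, plese verfy your acount, the invoce is overdew"
    polished = "Dear customer, please verify your account, the attached invoice is overdue"
    print(misspelling_score(clumsy))    # 0.5 -- likely flagged
    print(misspelling_score(polished))  # 0.0 -- slips through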

SlashNext researchers say, "cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack."

So, even if an LLM refuses an obviously illegal prompt, there are still ways that cybercriminals can take advantage of these technologies. 

Also: 7 advanced ChatGPT prompt-writing tips you need to know

For step-by-step instructions on using ChatGPT for legitimate purposes, check out ZDNET's guide on how to start using ChatGPT.

How much does ChatGPT cost?

ChatGPT's basic model is free to use. The tool can answer general queries, write content and code, or generate prompts for everything from creative stories to marketing projects. 

Also: How to access, install, and use AI ChatGPT-4 plugins (and why you should)

The ChatGPT Plus subscription costs $20 per month and includes usage on chat.openai.com, priority access to reduce wait times, and advanced features. A version of ChatGPT for enterprise users is also available. 

