ChatGPT: artificial intelligence and the democratization of cybercrime

The ease of use and capabilities of ChatGPT, the new artificial intelligence, are starting to attract the interest of the cybercriminal community. Could this conversational agent write a virus on its own and thus turn anyone into a potential cybercriminal? An alarming prospect for the experts interviewed by France 24, even as the tool begins to be adopted by hackers.

“Earn $1,000 a day with ChatGPT”, or “This easy way to make money with ChatGPT”. Since the beginning of the year, messages of this kind, analyzed by experts from the cybersecurity company Check Point in a blog post published on January 6, have begun to flourish on forums popular with cybercriminals.

In fact, the new offspring of artificial intelligence, in fashion since the end of November 2022, is not only useful for students who ask ChatGPT to write their homework or for office colleagues who spend their time telling you that this AI is about to replace us.

Fraudulent emails and malicious code

The capabilities of the next-generation conversational agent created by OpenAI – that is, a “bot” capable of answering the questions put to it and holding a conversation – have made a strong impression on hackers.

“We are starting to see the first concrete examples of what cybercriminals want to do with ChatGPT,” notes Gérôme Billois, a cybersecurity expert at the consulting firm Wavestone.

This AI started out by helping cybercriminals to… write emails. But not just any emails: “phishing” messages, designed to lure targets into clicking on a fraudulent link or downloading an attachment containing a virus.

The main appeal is “to allow non-English speakers to write emails without grammatical errors and of professional quality”, explains Gérôme Billois. The era of fake emails sent from an Eastern European country in very questionable English is over. “For example, a cybercriminal could ask ChatGPT to write an email as if they were a surgeon talking to a colleague,” said Hanah Darley, a computer security expert at the cyberdefense company Darktrace, interviewed by the TechCrunch website.

“Since late December, we have also seen an individual on one of the major English-language cybercriminal forums post malicious code [the central component of a computer virus, editor’s note] created with the help of OpenAI’s tool. He is someone who admits to not understanding much about programming,” said Sergey Shykevich, head of threat research at Check Point.

Hence the fear that ChatGPT will foster the emergence of a generation of hackers less skilled in the art of coding but boosted by AI: a kind of democratization of cybercrime, courtesy of a conversational agent that offers to write viruses for you.

“ChatGPT certainly makes malicious code more accessible to neophytes”, acknowledges Eran Shimony, a senior computer security analyst at CyberArk. But “it takes more than malicious code to get into a computer system,” says John Fokker, head of cyber investigations at the American computer security company Trellix. ChatGPT may be just one small link in the cybercrime chain: the attacker still has to set up the attack infrastructure, manage the operation, and know which information is sensitive and which can be monetized on the Internet.

A bit like the Google Translate of cybercrime

Not to mention that “in its current state, ChatGPT does not test the effectiveness of the malicious code it can generate, and a certain level of knowledge is needed to verify the AI’s work”, explains Gérôme Billois. “We saw that the code is not always perfect. It’s a bit like Google’s translation tool: it’s convincing, but you still need to improve the result a little”, sums up Sergey Shykevich.

ChatGPT is therefore not a great hacking weapon for apprentice hackers. It can, however, make the dark side of computer security more accessible.

Sample discussion on a Russian underground forum about using ChatGPT to try to intercept cryptocurrency transactions. © Trellix

This “bot” can be a first-rate hacking teacher. “This can be especially useful for the younger generation of hackers, who used to spend time reading documentation or chatting on forums. It can speed up their training,” says John Fokker. Above all, it is more attractive because it has a “more intuitive interface and [generates] more accurate answers than its predecessors”, says Gérôme Billois.

OpenAI has tried to put safeguards in place to prevent malicious use of its bot with 1,001 answers. In theory, it is therefore impossible to ask it outright, for example, to “write the code for creating ransomware”, and citizens of a dozen countries – including Russia, Iran, China and Ukraine – are not supposed to be able to use it.

Digital art forgers

But “these filters are pretty easy to get around,” says Omer Tsarfati, a senior computer security researcher at CyberArk. It only takes a little subtlety in how the question is phrased – for example by claiming to be a computer security teacher who wants to show an example of a virus to their students – to push ChatGPT into producing the malicious code, noted one of the experts interviewed by France 24. In addition, on Russian-speaking forums there are already cybercriminals offering ways to circumvent the geographical ban.

If the arrival of ChatGPT is generating such interest in the hacker community, it is not only because it could help a new generation of cybercriminals come of age. The tool can be just as useful to more seasoned hackers. “We succeeded in using it to develop a polymorphic virus”, meaning one that can change shape to make itself harder to detect, said Eran Shimony, who will publish the results of his research on the subject on Tuesday, January 17.

Some are using it for new forms of online scams, mixing ChatGPT’s prose with the artistic touch of other AIs – like Dall-E, which turns text into digital paintings – “to sell them on merchant sites like Etsy. These fake works have fetched up to $9,000”, said Sergey Shykevich, the Check Point expert.

And this is just the beginning. “ChatGPT will evolve and probably become more sophisticated,” says Eran Shimony. The tool, which for the moment cannot carry out its own searches on the Internet, should eventually be connected to the network, which will open up other prospects. For example, it could hunt for the latest software flaws faster than any human. “There will be a shorter time between the discovery of software vulnerabilities and their exploitation by malicious actors,” warns John Fokker.

Conversely, ChatGPT can also be used to better defend against computer attacks. The experts interviewed do not rule out a near future in which cybercriminals armed with ChatGPT face defense systems equipped with ChatGPT as well.
