Cybercriminals are already using ChatGPT to own you

When ChatGPT, OpenAI’s large language model interface, was released to the public late last year, it was immediately apparent to many in the information security community that the tool could, in theory, be exploited by cybercriminals in a number of ways.

Now new findings from Check Point Research indicate that this is no longer a hypothetical threat.

According to the company, underground hacking forums on the dark web are already flooded with real-world examples of cybercriminals attempting to use the program for malicious purposes, creating info stealers, encryption tools and phishing lures for use in hacking and fraud campaigns. There are even examples of actors using it in more creative ways, such as developing cryptocurrency payment systems with real-time currency trackers to add to dark web marketplaces, or using it to generate AI art to sell on Etsy and other online platforms.

Sergey Shykevich, a threat intelligence manager at Check Point Research, told SC Media that while most of the examples they found matched how they thought cybercriminals might use the program, the speed of adoption did not.

“I think maybe the only really surprising thing is that it happened much faster than I thought it would happen. I didn’t think that within two to three weeks we would already see malicious tools and other things in the underground,” he said.

In one forum, a cybercriminal bragged about recreating malware strains and hacking techniques by prompting ChatGPT with publicly available recipes, including a Python-based file stealer. Check Point researchers confirmed that the tool, while basic, actually works as advertised. The actor also used ChatGPT to produce a piece of Java code that covertly runs a PowerShell script, one that could be modified to download and run programs from a number of different malware families.

A cybercriminal describes how less technical actors can use ChatGPT to create a working infostealer. (Screenshot provided by Check Point Research)

While this actor has previously demonstrated technical skill, one of the main fears surrounding the emergence of ChatGPT was that it would make it easier for script kiddies to carry out more dangerous, higher-level attacks. Here, too, there is evidence that this is more than a theoretical possibility.

In another example, a separate actor shared that they were able to create a Python-based script for encrypting and decrypting files, with the kicker being their admission that they had no prior coding experience and that this was the first script they had ever written. Again, analysis by Check Point researchers found that while the program was inherently benign, it was fully functional and could be “easily modified” to encrypt all of a computer’s files, similar to ransomware.

While this particular actor (posting under the USDoD handle) appears to have a low level of technical skill, Check Point notes that they are nonetheless a respected member of the community. In fact, it appears to be the same actor who was observed advertising a database of stolen FBI InfraGard member information for sale last month.

A user on an underground hacking forum boasts that they were able to create a script for encrypting and decrypting files with no coding experience. (Screenshot provided by Check Point Research)
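Check Point did not publish the actor’s code, but scripts of this kind are typically thin wrappers around a standard cryptographic library. The minimal sketch below, which assumes the third-party Python “cryptography” package and an illustrative file name, shows the general shape of such a tool; it is not the actor’s script.

# Minimal encrypt/decrypt sketch using the symmetric Fernet recipe from
# the third-party "cryptography" package (pip install cryptography).
# The file name below is illustrative, not taken from the forum post.
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> None:
    """Encrypt a single file in place with the given Fernet key."""
    with open(path, "rb") as f:
        plaintext = f.read()
    with open(path, "wb") as f:
        f.write(Fernet(key).encrypt(plaintext))

def decrypt_file(path: str, key: bytes) -> None:
    """Reverse encrypt_file, restoring the original contents."""
    with open(path, "rb") as f:
        ciphertext = f.read()
    with open(path, "wb") as f:
        f.write(Fernet(key).decrypt(ciphertext))

if __name__ == "__main__":
    key = Fernet.generate_key()  # the caller must keep this key to ever decrypt
    encrypt_file("example.txt", key)
    decrypt_file("example.txt", key)

The same primitive protects legitimate backups; the ransomware-style risk the researchers describe comes from pointing such a routine at an entire filesystem and withholding the key.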

There are also examples of cybercriminals using the program in other creative ways to facilitate fraud-based activity. One user was able to create a PHP-based plugin to process cryptocurrency payments for a dark web marketplace, with trackers built in to keep tabs on the latest price of each currency, writing that for those who “don’t have knowledge,” there was “no damn problem.”

The user made it clear that the purpose of their post was to help “skids” (short for “script kiddies,” hackers with little or no technical knowledge) develop their own dark web marketplaces.

“This article is more or less to discuss abuse and being a lazy ass who won’t bother learning languages like python, Javascript or how to make a basic website,” the user wrote.
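The plugin itself was not published, but the real-time currency tracker the user describes can be approximated in a few lines. The sketch below polls CoinGecko’s public price API from Python; the actor’s actual data source, plugin code, and marketplace integration are unknown, so everything here is illustrative.

# Hedged sketch of a "real-time currency tracker": poll a public price
# API and print the latest USD quote for each coin. CoinGecko's simple
# price endpoint is a real public API; the coin list is illustrative.
import json
import time
import urllib.request

API = "https://api.coingecko.com/api/v3/simple/price?ids={ids}&vs_currencies=usd"

def latest_prices(coins):
    """Fetch the latest USD price for each coin in `coins`."""
    url = API.format(ids=",".join(coins))
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    while True:
        print(latest_prices(["bitcoin", "monero"]))  # e.g. {"bitcoin": {"usd": ...}, ...}
        time.sleep(60)  # refresh once a minute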

As always, there are caveats and limitations to consider. The tools created so far are fairly basic, and information security professionals have long predicted that AI tools would be most effective at automating the lower-level tasks that help hackers break into systems and socially engineer victims. It is also notable that at least some of the observed criminals have little or no development experience, suggesting that ChatGPT may so far offer limited value to more advanced actors.

Furthermore, Shykevich said ChatGPT still works best when prompted in English, and the researchers have so far seen few examples of similar experimentation by Russian-speaking cybercriminals. Even they, however, will eventually find it useful for generating things like more convincing, fluently written English phishing lures, something that has long been a barrier for many Russian cybercriminals.

But the findings also demonstrate how much easier ChatGPT can make it for low-level actors to develop the tools and knowledge needed to carry out mid-level attacks. OpenAI has built controls into the program to prevent its direct use for malicious purposes, but many of these controls can be easily bypassed through creative questioning.

“If they can create a script of malware without knowing a single line of code … if someone can just say we want a program that will do ABCD and they get it now, that’s on the bad side, because anyone can do it now and the entry level to become a cybercriminal becomes extremely low,” Shykevich said.

The experimentation by criminals has also led to something of a cat-and-mouse game. Shykevich said that an hour after their post was published, OpenAI built in new controls designed to limit ChatGPT’s ability to provide such information.

But ChatGPT’s publicly accessible nature means that it is essentially open to crowdsourced attempts to manipulate and circumvent these controls. Shykevich believes it will eventually become much more difficult to openly bypass ChatGPT’s security controls, but argues that OpenAI should consider developing some sort of authorization program that would either block a user after a certain number of attempts to bypass those controls, or attach a digital signature that OpenAI or law enforcement could use to trace strains of malware or other malicious output back to a particular user or computer.

Or, OpenAI’s program could follow a path similar to that of online payment systems, which were open to all kinds of abuse when first introduced to the public but were gradually made safer through the addition and adaptation of security controls.
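Neither control exists in ChatGPT today; both are Shykevich’s suggestions. As a rough sketch of the lockout idea, the mechanism could be as simple as counting flagged prompts per user, with the detection logic assumed to live elsewhere (every name below is hypothetical):

# Conceptual sketch of the proposed lockout: count prompts flagged as
# bypass attempts and refuse service past a threshold. The flagging
# logic is assumed to exist elsewhere; all names here are hypothetical.
from collections import defaultdict

MAX_ATTEMPTS = 3
_attempts = defaultdict(int)

def record_bypass_attempt(user_id):
    """Call whenever a prompt is flagged as trying to evade safety controls."""
    _attempts[user_id] += 1

def is_blocked(user_id):
    """True once a user has exceeded the allowed number of flagged prompts."""
    return _attempts[user_id] >= MAX_ATTEMPTS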

But until then, he expects the cat-and-mouse game to continue, and that malicious actors will develop even more use cases and applications over the next few months.

“I think there will be ‘exciting’ times in the next two to three months when we will really start to see maybe more sophisticated things and [hacking campaigns] in the wild using ChatGPT,” he said.
