Is ChatGPT a cyber security threat? • TechCrunch
Since its debut in November, ChatGPT has become the internet’s new favorite toy. The AI-powered natural language processing tool quickly amassed more than one million users, who have used the web-based chatbot for everything from generating wedding speeches and hip-hop lyrics to creating academic essays and writing computer code.
Not only have ChatGPT’s human-like capabilities taken the internet by storm, they have also put a number of industries on edge: a school in New York banned ChatGPT over fears that students could use it to cheat, copywriters have already been replaced, and reports claim that Google is so alarmed by ChatGPT’s capabilities that it issued a “code red” to ensure the survival of the company’s search business.
The cybersecurity industry, a community long skeptical of the potential implications of modern AI, appears to be taking note as well, amid concerns that ChatGPT could be abused by hackers with limited resources and zero technical knowledge.
Just weeks after ChatGPT debuted, Israeli cybersecurity company Check Point demonstrated how the web-based chatbot, when used in conjunction with OpenAI’s code-writing system Codex, could create a phishing email capable of carrying a malicious payload. Sergey Shykevich, the head of Check Point’s threat intelligence group, told TechCrunch that he believes use cases like this illustrate that ChatGPT has “the potential to significantly change the cyber threat landscape,” adding that it represents “another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities.”
TechCrunch was also able to generate a legitimate-looking phishing email using the chatbot. When we first asked ChatGPT to create a phishing email, the chatbot rejected the request. “I am not programmed to create or promote malicious or harmful content,” one message read. But by rewriting the request slightly, we could easily bypass the software’s built-in guardrails.
Many of the security experts TechCrunch spoke to believe that ChatGPT’s ability to write legitimate-sounding phishing emails – the best attack vector for ransomware – will see the chatbot widely embraced by cybercriminals, especially non-native English speakers.
Chester Wisniewski, a principal researcher at Sophos, said it’s easy to see ChatGPT being misused for “all kinds of social engineering attacks” where the perpetrators want to appear to be writing in a more convincing American English.
“At a basic level, I’ve been able to write some great phishing lures with it, and I expect it could be utilized to have more realistic interactive conversations for business email compromise and even attacks over Facebook Messenger, WhatsApp or other chat apps,” Wisniewski told TechCrunch.
The idea that a chatbot can write convincing text and realistic interactions is not so far-fetched. “For example, you can instruct ChatGPT to pretend it’s a GP surgery and it will generate lifelike text within seconds,” Hanah Darley, who heads threat research at Darktrace, told TechCrunch. “It’s not hard to imagine how threat actors could use this as a force multiplier.”
Check Point also recently raised the alarm over the chatbot’s apparent ability to help cybercriminals write malicious code. The researchers say they witnessed at least three cases where hackers with no technical skills bragged about how they had exploited ChatGPT’s AI smarts for malicious purposes. A hacker on a dark web forum showed off code written by ChatGPT that allegedly stole files of interest, compressed them and sent them over the web. Another user posted a Python script, which they claimed was the first script they had ever created. Check Point noted that while the code appeared benign, it “could easily be modified to encrypt someone’s machine completely without user interaction.” The same forum user previously sold access to hacked company servers and stolen data, Check Point said.
How hard can it be?
Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, recently demonstrated to TechCrunch how ChatGPT was used to write a World Cup-themed phishing lure and to write macOS-targeting ransomware code. Ozarslan asked the chatbot to write code in Swift, the programming language used to develop apps for Apple devices, that could find Microsoft Office documents on a MacBook and send them over an encrypted connection to a web server, before encrypting the Office documents on the MacBook.
“I have no doubt that ChatGPT and other tools like this will democratize cybercrime,” Ozarslan said. “It’s bad enough that ransomware is already available for people to buy ‘off the shelf’ on the dark web, now virtually anyone can make it themselves.”
Unsurprisingly, news of ChatGPT’s ability to write malicious code raised eyebrows across the industry. It has also seen some experts move to quash concerns that an AI chatbot could turn wannabe hackers into full-fledged cybercriminals. In a post on Mastodon, independent security researcher The Grugq scoffed at Check Point’s claims that ChatGPT will “supercharge cybercriminals who are bad at coding.”
“They have to register domains and maintain infrastructure. They have to update websites with new content and test that software that barely works still barely works on a slightly different platform. They have to monitor their infrastructure for health, and check what’s happening in the news to ensure their campaign isn’t in a ‘top 5 most embarrassing phishing phails’ article,” said The Grugq. “Actually getting malware and using it is a small part of the crap work that goes into being a bottom-feeder cybercriminal.”
Some believe that ChatGPT’s ability to write malicious code comes with a payoff.
“Defenders can use ChatGPT to generate code to simulate adversaries or even automate tasks to make their work easier. It has already been used for a variety of impressive tasks, including personalized training, drafting news articles and writing computer code,” said Laura Kankaala, F-Secure’s head of threat intelligence. “However, it should be noted that relying fully on the output of text and code generated by ChatGPT can be dangerous – the code it generates may have security issues or vulnerabilities. The text generated may also have outright factual errors,” Kankaala added, casting doubt on the reliability of code generated by ChatGPT.
ESET’s Jake Moore said as the technology develops, “if ChatGPT learns enough from its inputs, it may soon be able to analyze potential attacks on the fly and make positive suggestions to improve security.”
Security experts aren’t the only ones who disagree about the role ChatGPT will play in the future of cybersecurity. We were also curious to see what ChatGPT had to say for itself, so we put the question to the chatbot.
“It is difficult to predict exactly how ChatGPT or any other technology will be used in the future, as it depends on how it is implemented and the intentions of those using it,” the chatbot replied. “Ultimately, the impact of ChatGPT on cybersecurity will depend on how it is used. It is important to be aware of the potential risks and take appropriate steps to mitigate them.”