Viral ChatGPT Spurs Propaganda and Hacking Risk Concerns

(Bloomberg) — Ever since OpenAI’s viral chatbot was unveiled late last year, critics have lined up to flag potential abuse of ChatGPT by email scammers, bots, stalkers and hackers.
The latest warning is particularly striking: It comes from OpenAI itself. Two of the company’s policy researchers were among the six authors of a new report examining the threat posed by AI-enabled influence operations. (One of them has since left OpenAI.)
“Our bottom line is that language models will be useful to propagandists and will likely transform online influence operations,” according to a blog post accompanying the report, which was published Wednesday morning.
Concerns about advanced chatbots do not stop at influence operations. Cybersecurity experts warn that ChatGPT and similar AI models could lower the bar for hackers to write malicious code targeting existing or newly discovered vulnerabilities. Check Point Software Technologies Ltd., an Israel-based cybersecurity company, said attackers were already brainstorming on hacking forums about how to use chatbots to recreate malware strains or set up dark web marketplaces.
Several cybersecurity experts emphasized that any malicious code the model delivers is only as good as the user and the questions asked of it. Still, they said it could help less sophisticated hackers with tasks such as developing better decoys or automating post-exploit actions. Another concern is that hackers could develop their own AI models.
WithSecure, a cybersecurity company based in Helsinki, argues in a separate report, also published Wednesday, that bad actors will soon learn how to game ChatGPT by figuring out how to ask malicious questions that could lead to phishing attempts, harassment and fake news.
“It is now reasonable to assume that any new communications you receive may have been written by a bot,” Andy Patel, intelligence researcher at WithSecure, said in a statement.
A representative for OpenAI did not respond to a request for comment, nor did the OpenAI researchers who worked on the influence operations report. The FBI, National Security Agency and National Security Council declined to comment on the risks posed by such AI models.
Kyle Hanslovan, who used to create offensive cyber exploits for the US government before starting his own defensive company, Ellicott City, Maryland-based Huntress, was among those who said there are limits to what ChatGPT can deliver. He told Bloomberg News that it was unlikely to produce sophisticated new exploits of the kind a nation-state attacker could generate “because it lacks a lot of creativity and finesse.” But, like several other security experts, he said it would help non-English speakers craft significantly better phishing emails.
Hanslovan argued that ultimately ChatGPT is likely to give defenders “a little better edge” than attackers.
Juan Andres Guerrero-Saade, senior director of Sentinel Labs at cybersecurity company SentinelOne, said ChatGPT can code better than he can when it comes to the painstaking world of reverse engineering and “deobfuscation” — the effort to uncover the secrets and wizardry behind malicious source code.
Guerrero-Saade was so impressed by ChatGPT’s capabilities that he has reworked the curriculum of the class he teaches on nation-state hackers. Next week, he said, more than two dozen students in his class at the Johns Hopkins School of Advanced International Studies will hear his argument that ChatGPT can be a force for good.
It can make the building blocks of code readable faster than he can manually, and at a lower cost than expensive software, he said. Guerrero-Saade said he has asked it to go back and analyze the CaddyWiper malware that targeted Ukraine and to find flaws in his and others’ initial analysis.
“There really aren’t that many malware analysts in the world right now,” he said. “So this is a significant force multiplier.”
In the study on AI-enabled influence operations, the researchers said their main concern was that AI tools could make such campaigns cheaper, easier to scale, more immediate, more persuasive and harder to identify. The report is a joint effort by Georgetown University’s Center for Security and Emerging Technology, OpenAI and the Stanford Internet Observatory.
The authors also “outline steps that can be taken before language models are used for large-scale influence operations,” such as teaching AI models how to be “more fact-sensitive,” imposing stricter restrictions on the use of models, and developing AI technology that can identify the work of other AI machines, according to the report and blog.
But the risks are clear from the report, which was started well before the release of ChatGPT. “There are no silver bullets to minimize the risk of AI-generated disinformation,” it concludes.
©2023 Bloomberg LP