Viral ChatGPT spurs concerns about propaganda and hacking risks


  • AI
  • Thursday, 12 Jan 2023


Ever since OpenAI’s viral chatbot was unveiled late last year, detractors have lined up to flag potential misuse of ChatGPT by email scammers, bots, stalkers and hackers.

The latest warning is particularly eye-catching: It comes from OpenAI itself. Two of its policy researchers were among the six authors of a new report that investigates the threat of AI-enabled influence operations. (One of them has since left OpenAI.)

"Our bottom-line judgment is that language models will be useful for propagandists and will likely transform online influence operations,” according to a blog accompanying the report, which was published on Wednesday morning.

Concerns about advanced chatbots don’t stop at influence operations. Cybersecurity experts warn that ChatGPT and similar AI models could lower the bar for hackers writing malicious code that targets existing or newly discovered vulnerabilities. Check Point Software Technologies Ltd, an Israel-based cybersecurity company, said attackers were already musing on hacking forums about how to recreate malware strains or dark web marketplaces using the chatbot.

Several cybersecurity experts stressed that any malicious code provided by the model is only as good as the user and the questions asked of it. Still, they said it could help less sophisticated hackers with tasks such as developing better lures or automating post-exploitation actions. Another concern is that hackers could develop their own AI models.

WithSecure, a cybersecurity company based in Helsinki, argues in a new report, also out on Wednesday, that bad actors will soon learn how to game ChatGPT by crafting malicious prompts that could feed into phishing attempts, harassment and fake news.

"It’s now reasonable to assume any new communication you receive may have been written with the help of a robot,” said Andy Patel, intelligence researcher at WithSecure, in a statement.

A representative for OpenAI didn’t respond to a request for comment, nor did the OpenAI researchers who worked on the report on influence operations. The FBI, National Security Agency and National Security Council declined to comment on the risks posed by such AI models.

Kyle Hanslovan, who used to create offensive cyber exploits for the US government before setting up his own defensive company, Huntress, based in Ellicott City, Maryland, was among those who said there are limits to what ChatGPT could deliver. He told Bloomberg News it was unlikely to create sophisticated new exploits of the sort a nation-state attacker can generate “because it lacks a lot of creativity and finesse.” But like several other security experts, he said it would help non-English speakers craft markedly better phishing emails.

Hanslovan argued that ChatGPT is ultimately likely to give defenders “a little bit better of an upper hand” than the attackers.

Juan Andres Guerrero-Saade, senior director of Sentinel Labs at the cybersecurity company SentinelOne, said ChatGPT knows code better than he does when it comes to the painstaking world of reverse engineering and “deobfuscation”, the effort to uncover the secrets and sorcerers behind malicious source code.

Guerrero-Saade was so astounded by ChatGPT’s capabilities that he has thrown out the teaching syllabus for his course on nation-state hackers. Next week, he said, more than two dozen students in his class at the Johns Hopkins School of Advanced International Studies will hear his belief that ChatGPT can be a force for good.

He said it can make the building blocks of code legible more quickly than he can manually, and more cheaply than expensive software. Guerrero-Saade said he has been asking it to go back and reanalyse the CaddyWiper malware that targeted Ukraine and to find errors in his and others’ initial analysis.

"There’s really not that many malware analysts in the world right now,” he said. "So this is a sizeable force multiplier.”

In the study on AI-enabled influence operations, the researchers said their main worries were that AI tools could make such campaigns cheaper, easier to scale, more immediate, more persuasive and harder to identify. The report is a joint effort by Georgetown University’s Center for Security and Emerging Technology, OpenAI and the Stanford Internet Observatory.

The authors also "outline steps that can be taken before language models are used for influence operations at scale,” such as teaching AI models how to be "more fact sensitive,” imposing stricter restrictions on usage of models and developing AI technology that can identify the work of other AI machines, according to the report and the blog.

But the risks are clear from the report, which was started well before the release of ChatGPT. “There are no silver bullets for minimising the risk of AI-generated disinformation,” it concludes. – Bloomberg
