BERLIN: AI chatbots can generate text of astonishingly high quality: letters, summaries, essays, stories in a particular writing style, even functioning software code.
But for all the benefits that the technology offers, there’s also the risk that it can be abused by cybercriminals.
The technology "poses novel IT security risks and increases the threat potential of some known IT security threats," Germany's Federal Office for Information Security (BSI) has concluded.
Behind every AI chatbot is a language model that automatically processes natural language in written form. Well-known models include OpenAI's GPT and Google's PaLM. PaLM powers Google's chatbot Bard, while GPT underpins ChatGPT and Microsoft's Bing Chat.
The known threats that AI language models can further amplify, according to the BSI, include the creation and enhancement of malware, and the creation of spam and phishing emails that exploit human characteristics such as helpfulness, trust and fear (known as social engineering).
Language models can also adapt the writing style of a text to resemble that of a particular organisation or person, thereby making fraudulent emails more convincing.
What's more, the tell-tale spelling and grammatical errors that used to be common in spam and phishing emails are hardly ever found in AI-generated text.
Entirely new problems and threats posed by AI language models that the BSI has identified include the risk that attackers may redirect users' input into a language model in order to manipulate chats and extract information from potential victims.
Beyond the realm of phishing and hacking attacks, cybersecurity experts also fear that language models will be misused to produce fake news, propaganda or hate messages.
The ability to imitate writing styles poses a particular danger here: false information could be spread in a style that mimics specific individuals or organisations. Meanwhile, machine-generated reviews could be used to promote (or discredit) services or products.
The data used to train a language model could also cause problems, the BSI warns. Questionable content such as disinformation, propaganda or hate speech in a model's training data could be reproduced in the AI-generated text.
For anyone using an AI chatbot, perhaps the most important point is that it’s never certain that AI-generated content is factually correct. This is in part because a language model can only derive information from the existing texts it has ingested, meaning it's not always up-to-date. At the same time, the aim of the AI is to string words together that are highly likely to appear beside each other, not to state facts.
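The point about stringing together likely words can be illustrated with a deliberately simplified sketch. The toy bigram model below (an assumption for illustration only, nothing like a real chatbot's architecture) counts which word follows which in its training text and then always emits the most frequent successor. It shows why such a system optimises for plausibility rather than truth: if the training data contains a wrong "fact", the model can reproduce it just as fluently as a correct one.

```python
from collections import Counter, defaultdict

# Toy training data containing both a correct and an incorrect statement.
training_text = (
    "the capital of france is paris . "
    "the capital of france is nice ."  # wrong "fact" in the training data
).split()

# Count, for every word, which words follow it and how often.
successors = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    successors[current][nxt] += 1

def generate(start, length):
    """Greedily chain the most frequent next word -- plausibility, not facts."""
    words = [start]
    for _ in range(length):
        counts = successors.get(words[-1])
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 6))
```

Whichever completion the model picks after "is", it is chosen because it was frequent in the training text, not because it was checked against reality.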
For all these reasons, users should be cautious with AI-generated content. Because the text they produce is often error-free, AI language models give the impression of human-like capability and thus inspire trust in content that may nonetheless be inappropriate, factually incorrect or manipulated. – dpa