The chatbot revolution risks becoming a race of the reckless


OpenAI, the developer of ChatGPT, has released its latest model, GPT-4, which appears to be the company’s most ambitious public release yet. — Reuters

AI chatbots are like buses: you’ll wait half an hour in the rain with none in sight, then three come along all at once. In March 2023, OpenAI released its newest chatbot, GPT-4. It’s a name that sounds more like a rally car than an AI (artificial intelligence) assistant, but it heralds a new era in computing.

Google responded with Bard, its more grandly named search chatbot. Chinese search giant Baidu launched its cheeky-sounding Ernie Bot. Salesforce demoed its more serious-sounding Einstein GPT chatbot. And Snapchat, not to be outdone, announced its My AI chatbot.

It’s now fashionable for every tech platform and enterprise software company to have an AI chatbot providing an intelligent interface to their software. It may soon look and sound like the Hollywood movie Her. We will interact with our smart devices through AI chatbots. We will talk to them. They’ll understand complex and high-level commands. They will remember the context of our conversation. And they’ll intelligently do what we instruct them to do.

We’re still working out what these chatbots can do. Some of it is magical. Writing a complaint letter to the council for an undeserved parking ticket. Or composing a poem for your colleague’s 25th work anniversary. But some of it is more troublesome. Chatbots like ChatGPT or GPT-4 will, for example, make stuff up, confidently telling you truths, untruths and everything in between. The technical term for this, according to experts, is “hallucination”.

The goal isn’t to eliminate hallucination. How else will a chatbot write that poem if it can’t hallucinate? The aim is to prevent the chatbot from hallucinating things that are untrue, especially when they are offensive, illegal or dangerous.

Eventually, the problem of chatbots hallucinating untruths is likely to be addressed, along with other issues such as bias, a lack of references, and copyright concerns around using others’ intellectual property to train the chatbots. Disturbingly, however, tech companies are throwing caution to the wind by rushing to put these AI tools in the hands of the public with limited safeguards or oversight.

For the last few years, tech companies have developed ethical frameworks for the responsible deployment of AI, hired teams of scientists to oversee the application of these frameworks, and pushed back against calls to regulate their activities. But commercial pressure appears to be changing all that.

At the same time that Microsoft announced it was incorporating ChatGPT into all of its software tools, it let go of one of its AI and ethics teams. Transparency sits at the heart of Microsoft’s responsible AI principles, yet Microsoft had been secretly using GPT-4 within the new Bing search for the last few months.

Google, which had previously held back its chatbot LaMDA from the public over concerns about possible inaccuracies, appears to have been goaded into action by Microsoft’s announcement that Bing search would use ChatGPT. Google’s Bard chatbot is the result of adding LaMDA to its popular search tool. Deciding to build the Bard chatbot proved expensive for Google: a simple mistake in Bard’s first demo wiped US$100bil (RM440bil) off the share price of Google’s parent company, Alphabet.

OpenAI, the company behind ChatGPT, put out a technical report explaining GPT-4. OpenAI’s core mission is the responsible development of artificial general intelligence – AI that is as smart as or smarter than a human. But the report was more white paper than technical report, containing no technical details about GPT-4 or its training data. OpenAI was unashamed in its secrecy, blaming the commercial landscape first and safety second. AI researchers cannot understand the risks and capabilities of GPT-4 if they don’t know what data it is trained on. The only open part of OpenAI now is the name.

There is a fast-opening chasm between what technology companies are disclosing and what their products can do, and it is a gap only government action can close. If these organisations are going to be less transparent and act more recklessly, then it falls upon the government to act. Expect regulation.

We can look to other industries for how that regulation might work. In high-risk areas like aviation or pharmacology, there are government bodies with significant powers to oversee new technologies. We can also look to Europe, whose forthcoming AI Act has a strongly risk-based focus. Whatever shape this regulation takes, it is needed if we are to secure the benefits of AI while avoiding the risks. – 360info

Prof Toby Walsh is chief scientist of UNSW.AI, the University of New South Wales’ new AI Institute in Sydney, Australia, and a recipient of an Australian Research Council Laureate Fellowship exploring “trustworthy AI”. His most recent book is Machines Behaving Badly: The Morality of AI. This article was originally published under Creative Commons by 360info.
