Questions and warnings about AI chatbots in wake of ChatGPT launch


A new piece of software can spit out text that looks like it was written by a human. The only problem: ChatGPT definitely makes mistakes. Does this AI breakthrough mean the web will soon be flooded with writing that is mediocre – or even plain wrong? — Photo: Sebastian Gollnow/dpa

MUNICH: Seconds after you type in a prompt, ChatGPT shoots back a well-formulated text that is scarcely distinguishable from one written by a human.

Tools like this could change the world, with grave consequences for millions of people, experts warn.

"If you write emails in your job, produce documents, compose articles or promotional copy, or exchange legal papers, you must now reckon that this (writing software) will have far-reaching influence. And not necessarily a good one," IT expert Sridar Ramaswamy warned at the recent innovation conference DLD in Munich.

ABBA musician Björn Ulvaeus, meanwhile, predicts that software will one day write better songs than those being written today.

Predictions that artificial intelligence (AI) software will replace office workers, much as factory automation once replaced many manufacturing jobs, have been around for a long time. To date, machine learning has mostly handled auxiliary tasks and seemed far from ready for the main work.

But in November, OpenAI launched the chatbot ChatGPT, triggering a great deal of hype. On command, ChatGPT can write all sorts of texts – essays, business letters, poems, news articles – and, if desired, imitate the style of certain authors.

The software is trained on huge amounts of text and imitates what it has seen by predicting the most plausible next words. The result is always grammatically correct, solid – and somewhat uninspiring. But for everyday matters such as a cancellation notice or an email, it is adequate.
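The underlying principle can be illustrated with a toy sketch in Python – a deliberately simplified assumption for illustration, not OpenAI's actual code or model. It merely counts which word tends to follow which in a small sample text and then chains together the statistically most plausible next words:

    # Toy illustration (not OpenAI's code) of "predict the most plausible next word":
    # count which word tends to follow which in a small sample text, then string
    # the most frequent followers together, one word at a time.
    from collections import Counter, defaultdict

    sample_text = (
        "the prime minister of australia is anthony albanese . "
        "the prime minister gave a speech . "
        "the president of the united states gave a speech ."
    )

    # For every word, count the words that immediately follow it.
    followers = defaultdict(Counter)
    words = sample_text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

    def most_plausible_next(word):
        # The most frequent follower of `word`, or None if the word is unknown.
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    # Generate a short continuation starting from "the".
    word = "the"
    sentence = [word]
    for _ in range(6):
        word = most_plausible_next(word)
        if word is None:
            break
        sentence.append(word)
    print(" ".join(sentence))  # -> "the prime minister of australia is anthony"

Real systems like ChatGPT use neural networks trained on vast corpora rather than simple word counts, but the core idea – continuing text with whatever seems statistically most plausible, whether or not it is true – is the same, which is also why confident-sounding errors can slip through.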

ChatGPT can also answer knowledge questions in complete sentences, based on information it has gathered. Ask it, for example, how old the president of Australia is, and it will respond: "Australia does not have a president." Then, in know-it-all fashion, it will add that Prime Minister Scott Morrison is 54.

But there's a problem: Anthony Albanese has been prime minister since May 2022. ChatGPT's knowledge base only extends to 2021. Sometimes the software points this out, but sometimes not. It gets worse: on another attempt, the chatbot makes Morrison president.

ChatGPT is, in fact, an experimental project that can and will keep learning. The shortcoming, however, reveals a fundamental problem: the answer looks convincing, but it's wrong – and the user has no reference point for judging this.

At the same time, people intent on creating false information are gaining a powerful tool. The technology creates "limitless possibilities for formulating relatively plausible lies very quickly," Silicon Valley veteran Phil Libin warned at the Munich conference. This year, he predicted, will see "a wave of nonsense" engulfing us. Over time, AI will become better anchored in reality, and its capabilities will then pay off.

Until then, Libin stresses, people must resist the temptation to ease their workload with tools like ChatGPT by churning out poor-quality automated content. That would only "propagate mediocrity." If something can be written by an AI programme, humans should not be writing it that way.

"We must set the bar higher on what it means that something was created by a human being – with a level of quality and originality," he insisted.

Elsewhere in the IT sector, work on linguistically articulate AI is under way in many places. While developer OpenAI has opened ChatGPT up to the public as a demo, Google has so far kept its own language programme under wraps and uses it only internally.

Microsoft could profit from ChatGPT. In 2019 the software giant invested US$1bil (RM4.2bil) in OpenAI. Another US$2bil (RM8.5bil) followed, according to reports in US media. OpenAI used the money to pay for the necessary computing power.

Now a further investment of US$10bil (RM42.8bil) is under discussion. Microsoft could thereby secure a one-third stake in OpenAI, while also planning to use the AI technology in its cloud service Azure and its ailing search engine Bing, according to The Information. – dpa

   
