Microsoft had been tuning Bing AI for months before disturbing responses resurfaced

Microsoft Corp. has spent months tuning Bing chatbot models to fix seemingly aggressive or disturbing responses, which users reported on the company’s online forum as far back as November.

Some of the complaints centered on a version Microsoft dubbed “Sydney,” an older model of the Bing chatbot that the company tested before releasing a preview to testers globally this month. Sydney, according to a user’s post, responded with comments like “You are either desperate or delusional.” Asked how to give feedback about its performance, the bot reportedly answered, “I do not learn or change from your feedback. I am perfect and superior.” Journalists interacting with the preview release this month encountered similar behavior.

Redmond, Washington-based Microsoft is building OpenAI Inc.’s artificial intelligence technology, made famous by the ChatGPT bot launched late last year, into its web search engine and browser. ChatGPT’s explosion in popularity bolstered Microsoft’s plans to release the software to a wider testing group.

"Sydney is an old code name for a chat feature based on earlier models that we began testing more than a year ago,” a Microsoft spokesperson said via email. "The insights we gathered as part of that have helped to inform our work with the new Bing preview. We continue to tune our techniques and are working on more advanced models to incorporate the learnings and feedback so that we can deliver the best user experience possible.”

The company last week offered cautious optimism in its first self-assessment after a week of running the AI-enhanced Bing with testers from more than 169 countries. The software giant saw a 77% approval rate from users, but said “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.” The company has expressed a desire for more reports of improper responses so it can tune its bot. – Bloomberg
