ChatGPT maker OpenAI says it’s working to reduce bias, bad behaviour


OpenAI, the artificial-intelligence research company behind the viral ChatGPT chatbot, said it is working to reduce biases in the system and will allow users to customise its behaviour following a spate of reports about inappropriate interactions and errors in its results.

“We are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs,” the company said in a blog post. “In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should.”

OpenAI is responding to reports of biases, inaccuracies and inappropriate behaviour by ChatGPT itself, and to broader criticism of the new chat-based search products now in testing from Microsoft Corp and Alphabet Inc’s Google. In a blog post on Wednesday, Microsoft detailed what it has learned about the limitations of its new Bing chatbot, which is based on OpenAI technology, and Google has asked workers to spend time manually improving the answers of its Bard system, CNBC reported.

San Francisco-based OpenAI also said it is developing an update to ChatGPT that will allow limited customisation by each user to suit their tastes, styles and views. In the US, right-wing commentators have cited examples of what they see as pernicious liberalism hard-coded into the system, leading to a backlash against what the online right has dubbed “WokeGPT”.

“We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” OpenAI wrote on Thursday. “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging – taking customisation to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs. There will therefore always be some bounds on system behaviour.” – Bloomberg
