ChatGPT's taste for literary nonsense sparks alarm
Photo by Solen Feyissa on Unsplash

PARIS: OpenAI's GPT models can often be fooled into declaring that "pseudo-literary" nonsense is great, a German researcher has found.

Christoph Heilig said he discovered that they consistently rated "nonsense" higher – including when their so-called "reasoning" features were activated – which could have stark implications for the development of artificial intelligence.
