OpenAI claims tool to detect AI-generated images is 99% accurate

OpenAI is building a tool to detect images created by artificial intelligence with a high degree of accuracy.

Mira Murati, chief technology officer of the maker of popular chatbot ChatGPT and image generator Dall-E, said on Tuesday (Oct 17) that OpenAI’s tool is “99% reliable” at determining if a picture was produced using AI. It’s being tested internally ahead of a planned public release, she said, without specifying a timeline.

Murati spoke alongside OpenAI Chief Executive Officer Sam Altman, as both executives attended the Wall Street Journal’s Tech Live conference in Laguna Beach, California.

There are already a handful of tools that claim to detect images or other content made with AI, but they can be inaccurate. For instance, OpenAI in January released a similar tool intended to determine whether text was AI-generated, but it was shelved in July because it was unreliable. The company said it was working on improving that software and was committed to developing ways to identify whether audio or images were made with AI as well.

The need for such detection tools is growing as AI tools can be used to manipulate or fabricate news reports of global events. Adobe Inc’s Firefly image generator addresses another aspect of the challenge by promising not to create content that infringes on the intellectual property rights of creators.

On Tuesday, the OpenAI executives also gave a hint about the AI model that will follow GPT-4. Though OpenAI hasn’t said publicly what a follow-up model to GPT-4 might be called, the startup filed an application for a “GPT-5” trademark with the US Patent and Trademark Office in July.

Chatbots such as ChatGPT – which uses GPT-4 and a preceding model, GPT-3.5 – are prone to making things up, also known as hallucinating. When asked whether a GPT-5 model would no longer spout falsehoods, Murati said, “Maybe.”

“Let’s see. We’ve made a ton of progress on the hallucination issue with GPT-4, but we’re not where we need to be,” she said.

Altman also addressed the possibility that OpenAI could design and manufacture its own computer chips for training and operating its AI models, rather than relying on those provided by companies such as Nvidia Corp, currently seen as the market leader.

“The default path would certainly be not to,” he said. “But I would never rule it out.” – Bloomberg
