ChatGPT chats are not confidential, so don't tell it your secrets


As more and more people rely on ChatGPT for help with everyday issues, cybersecurity experts are warning users to think twice before sharing information they wouldn't share with a stranger. — Photo: Philipp Brandstädter/dpa

BERLIN: Intimate health issues are something you might rather tell an AI about than wait at a nearby practice to awkwardly describe them to a doctor. After all, ChatGPT has an answer for virtually any question, right?

And yet: Apart from the fact that ChatGPT is known to provide false answers (some of which can be entirely fabricated), AI chat services should not be considered the best keepers of secrets.

As more and more people rely on ChatGPT and Google's AI, Bard, for help with everyday issues, cybersecurity experts are warning users to think twice before sharing information they wouldn't share with a stranger.

The Germany-based Hasso Plattner Institute (HPI), which specializes in IT, warns against "disclosing sensitive data," as much of the information you share with an AI is used to train the systems that power it, making them smarter.

Any user who shares confidential information with an AI risks giving up their privacy, and a look at ChatGPT's policy shows why.

"As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements," ChatGPT developer OpenAI says, confirming that employees can see what you write. The company does have a data deletion feature, however.

It's not only personal secrets that are best kept away from an AI, but also company information; according to the HPI, employees should be wary of giving an AI access to any company data.

Anyone who uploads their company's internal employee data to pep up a presentation, or asks ChatGPT to help make sense of the company's financial figures, could even be passing on trade secrets.

Cybersecurity research firm Cyberhaven has warned that many companies are leaking sensitive data "hundreds of times each week" as a result of employees oversharing with ChatGPT.

A temporary glitch in ChatGPT had allowed some users to "see the titles of other users' conversation history," OpenAI chief executive Sam Altman confirmed earlier in March. "We feel awful about this," Altman said on Twitter, confirming the bug had been fixed. – dpa
