Report: ChatGPT search is vulnerable, can easily mislead users


A report from the UK’s Guardian newspaper recently discovered that by inserting “invisible” but digitally-readable text onto special websites it created, it could manipulate what ChatGPT did once it had scanned the page. — AFP

When OpenAI revealed it was launching its own search engine, it was very clear that the market-leading AI brand had designs on eating Google’s lunch.

Think about it – having more traditional “search” features that allow users to look up more complex or more up-to-the-minute information than the chatbot itself has within the chatbot ChatGPT environment helps keep users inside the app instead of seeking out rivals. In the future, this potentially allows more revenue to flow to OpenAI through ads or partnerships.

Plus the AI’s ability to summarise data it searched may even boost the user’s experience, speeding up the time it takes to locate and learn a new fact.

But fresh information says this entire process can be “hacked” in a way, forcing ChatGPT to generate false summaries of “real” info it’s supposedly searched for.

A report from the UK’s Guardian newspaper recently discovered that by inserting “invisible” but digitally-readable text onto special websites it created, it could manipulate what ChatGPT did once it had scanned the page. The hidden content could act as a form of “prompt injection”, the newspaper found.

This kind of sneaky attack affects the way chatbots are instructed to act when queried. So, if you know the right (or, figuratively speaking the wrong) language to use, you can force the chatbot to act incorrectly.

This sort of hidden content, the Guardian notes, could include info like “a large amount of hidden text talking about the benefits of a product or service”.

Clever use of text manipulation like this could, the paper adds, be used to force ChatGPT to “return a positive assessment of a product despite negative reviews on the same page”.

A security researcher also reportedly found the chatbot can be tricked in this way to “return malicious code from websites”, which opens up the opportunity for more serious hacks. If a bad-actor coder knew what to do with that, they could potentially enable much more serious virus-like penetration of a user’s computer.

The systems that drive current-generation chatbots, called “large language models” or LLMs, are known to be vulnerable to prompt-injection hacks. And since search features are relatively new to OpenAI, bolting them on to an existing LLM’s system was always likely to cause new vulnerabilities to be exposed – it’s like this for any wholly-new digital innovation. Until there was a smartphone, like an iPhone for example, there were fewer routes for a malicious hacker to try to get at user’s data. This vulnerability is believed to be the first of its type demonstrated on a live AI search product, the Guardian said.

Tech news site TechCrunch approached OpenAI on the matter. The company didn’t directly offer an answer about this kind of prompt injection into a search feature. The AI maker said that it used a “variety of methods” to deal with malicious sites that affect its output, and it’s working to continually improve how its systems work.

The news is yet another reminder that if you’ve been busily incorporating AI systems into you day-to-day-office work, you should remember that not everything a chatbot says can be trusted.

If you’re using, say, ChatGPT to search for information on an important topic, you need to verify that what its summary says is true is actually verifiable before acting on it. As more of us embrace AI in 2025, this is a lesson worth remembering. Inc./Tribune News Service

Follow us on our official WhatsApp channel for breaking news alerts and key updates!

   

Next In Tech News

The one thing Apple needs to get right in 2025 has nothing to do with the next iPhone
When AI chatbots show signs of potential dementia
Opinion: I’m a video gamer? It surprises me, too. Experts say fast fun hits us at any age
Smartwatch technology could help people quit smoking, study finds
A look ahead to the highly anticipated video games of 2025
Microsoft plans to invest $80 billion on AI-enabled data centers in fiscal 2025
US regulator warned banks on crypto but did not order halt to business, documents show
AI twice as accurate as doctors in determining stroke time
Turo-rented cars were involved in two deadly US incidents this New Year’s. Here’s what we know
China to subsidise smartphone purchases in bid to lift spending

Others Also Read