Here's how cyber criminals are using ChatGPT


A new artificial intelligence tool that has been used in classrooms, online forums and social media posts is now being used to steal your private information and money.

ChatGPT has gained a lot of attention for its ability to generate realistic human responses to text-based input, particularly in academia.

So far, it has been used for many legitimate purposes, and some major companies have turned to the tool to conduct business.

But the Better Business Bureau explained recently that cyber criminals have also taken advantage of the program’s AI capabilities for malicious purposes, like phishing, impersonation and even romance scams.

“Scammers have historically been on the cutting edge of technology and I don’t see this being any different,” Tom Bartholomy, CEO of the Better Business Bureau of Southern Piedmont and Western North Carolina, said. “As they see that work, as they see people engaging with it, they’re just going to continue to refine it and continue to find other scams that they can feed that same technology into.”

What to look out for

Bartholomy said most of the ChatGPT scams so far have involved phishing and impersonation. For example, scammers posing as Amazon send out emails notifying customers that accounts have been deactivated and later requesting personal information.

“One of the tells that we’ve always cautioned people on when you get an email or you get a text is that if there’s any misspellings or if the grammar’s poor or if the sentence structure is just off...that that can be a pretty good sign that you’re dealing with a scammer. ChatGPT takes all that away,” Bartholomy said. “It’s going to make it easier for the scammers and make it more difficult for us as consumers to be able to discern what’s legitimate and what’s fake.”

Chatbots have been around for years, particularly in business customer service. Bartholomy explained that ChatGPT’s more advanced conversational model makes it harder for consumers to pick up on red flags.

“That type of technology has been around longer than ChatGPT...where you think you’re engaging with someone on a live chat. It’s actually just a bunch of canned responses until you give them a question that it can answer. Now with ChatGPT, that conversation can continue based on the questions that you have and the database that they’re pulling information from,” Bartholomy said.

How to protect yourself

The Better Business Bureau recommends that online consumers watch for suspicious activity and take the following precautions:

  • Be cautious of unsolicited messages
  • Verify the identity of the person you’re chatting with by asking for contact information
  • Scrutinize text for any red flags
  • Use two-factor authentication for your online accounts
  • Use a password manager to generate and store strong passwords
  • Be careful when downloading files or clicking on links
