Is that voice real or AI? This startup says it can tell


Some in the industry have expressed misgivings about the rise of AI companies built to combat AI problems. — Image by DC Studio on Freepik

The latest wave of artificial intelligence technology can mimic the voice of almost anyone: the president, a relative or a bank customer.

This is the problem and the opportunity that decade-old audio technology startup Pindrop Security Inc is tackling. The company has long provided voice authentication services to banks and insurers. Last week, it released a new product that it says can detect AI-generated speech in both phone calls and digital media. It’s marketing the feature to media organisations, government agencies and social networks.

Pindrop is one of a growing number of security-minded companies aiming to combat the threat of AI fakes and frauds. These include Protect AI Inc and Sam Altman’s Tools For Humanity Corp, maker of Worldcoin, which identifies people using scans of their eyes.

With a specialty in audio, Pindrop made headlines in January for detecting the source of a robocall deepfake of President Joe Biden urging people not to vote in the New Hampshire primary. The scale of attacks is rising: the company said it has logged a more than fivefold increase in attempted attacks directed at its customers since last year.

"It’s pretty easy to bring together a voice clone and the spoofing software to effectively seem like somebody else on the phone,” said Rachel Tobac, chief executive officer SocialProof Security.

Pindrop has attracted money from a cadre of high-profile investors, including Andreessen Horowitz and GV. This year, the company raised US$100mil (RM434.80mil) in debt financing from Hercules Capital Inc. Its latest valuation is US$925mil.

Co-founder Vijay Balasubramaniyan started thinking about the problem of audio fakes after trying to buy a suit while travelling in India as a PhD student. His American bank called him at around 3am his time to verify the transaction, asking for his Social Security number. Without a way to verify who the caller was, and without much information from the bank, he ended the call.

"This is crazy,” Balasubramaniyan recalls thinking on his plane ride back to the US. "Phones have existed for so long, since Alexander Graham Bell, and we still don’t have a way to identify what’s on the other end of that interaction.” (He never got the suit.)

Pindrop’s technology works by analysing audio to determine whether a voice is really human or merely human-like. Humans speak by making specific sounds, which form words, Balasubramaniyan said. But machines don’t produce sound the way humans do, and occasionally generate variants that defy the physical limits of how a human mouth produces sound. Because every second of voice audio contains 8,000 samples, there are thousands of points at which AI can make a mistake.

"As you get more and more audio, you start seeing these anomalies glaring at you,” said Balasubramaniyan, who added that because all humans make sounds in the same way, their detection software is language agnostic.

The company says its new tool can identify AI-generated audio with 99% accuracy, but there’s still debate within the industry over the limitations of AI detection. For teachers, researchers and social media users, spotting AI-generated text and images has been a vexing problem as the technology advances. In March, when OpenAI released a tool that can replicate people’s voices, the company suggested in a blog post that businesses should phase out voice-based authentication for accessing bank accounts and other sensitive information.

John Chambers, the former Cisco Systems Inc chief and a Pindrop board member, touted voice ID as an unusually secure form of online authentication. Chambers invested in the startup through his firm, JC2 Ventures. "Voice will be the primary cybersecurity way of identifying you in the future,” he said. When voice is coupled with biometrics and data about the device used, "it will be almost impossible for someone to completely break that,” he said.

Some in the industry have expressed misgivings about the rise of AI companies built to combat AI problems. Unless laws are passed to decrease the amount of personal data available online, the industry might find itself trapped in a perennial fight between good AI and bad AI, said James E. Lee of the Identity Theft Resource Center.

As security technology evolves, so will threats. It’s possible that bad actors could train an algorithm to evade the checks that companies like Pindrop use to identify deepfakes, said Andrew Grotto, a cybersecurity policy expert at Stanford University. "You do end up in this arms race, this cat and mouse game between the defenders and threat actors,” Grotto said. – Bloomberg
