Google CEO warns against rush to deploy AI without oversight



Alphabet Inc and Google chief executive officer Sundar Pichai said in an interview broadcast Sunday that the push to adopt artificial intelligence technology must be well regulated to avoid potential harmful effects.

Asked in a 60 Minutes interview about what keeps him up at night with regard to AI, Pichai said “the urgency to work and deploy it in a beneficial way, but at the same time it can be very harmful if deployed wrongly.”

Mountain View, California-based Google has been among the leaders in developing and implementing AI across its services. Products such as Google Lens and Google Photos rely on the company’s image-recognition systems, while Google Assistant benefits from the natural language processing research Google has been doing for years.

Still, Google has deployed the technology at a deliberately measured pace, whereas OpenAI’s ChatGPT has set off a race to roll out AI tools at a much faster clip.

“We don’t have all the answers there yet, and the technology is moving fast,” Pichai said. “So does that keep me up at night? Absolutely.”

Google is now playing catch-up as it looks to infuse its products with generative AI, software that can create text, images, music or even video from user prompts. ChatGPT and another OpenAI product, Dall-E, showed the technology’s potential, and businesses from Silicon Valley to China’s Internet leaders are now rushing to present their own offerings.

Former Google CEO Eric Schmidt urged global tech companies to come together and develop standards and appropriate guardrails, warning that any slowdown in development would “simply benefit China.”

Despite the sense of urgency in the industry, Pichai cautioned companies against being swept up in the competitive dynamics, and he sees lessons in OpenAI’s more direct approach and its debut of ChatGPT.

“One of the points they have made is, you don’t want to put out a tech like this when it’s very, very powerful because it gives society no time to adapt,” Pichai said. “I think that’s a reasonable perspective. I think there are responsible people there trying to figure out how to approach this technology, and so are we.”

Among the risks of generative AI that Pichai highlighted are so-called deepfake videos, in which individuals are depicted making remarks they never actually made. Such pitfalls illustrate the need for regulation, Pichai said.

“There have to be consequences for creating deepfake videos which cause harm to society,” he said. “Anybody who has worked with AI for a while, you know, you realise this is something so different and so deep that we would need societal regulations to think about how to adapt.” – Bloomberg
