Ilya Sutskever on how AI will change and his new startup Safe Superintelligence


FILE PHOTO: AI (Artificial Intelligence) letters and robot hand are placed on computer motherboard in this illustration taken, June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

SAN FRANCISCO/NEW YORK (Reuters) - Ilya Sutskever, OpenAI's former chief scientist, has launched a new company called Safe Superintelligence (SSI), aiming to develop safe artificial intelligence systems that far surpass human capabilities.

He and his co-founders outlined their plans for the startup in an exclusive interview with Reuters this week.

Sutskever, 37, is one of the most influential technologists in AI and trained under Geoffrey Hinton, known as the "Godfather of AI". Sutskever was an early advocate of scaling - the idea that AI performance improves with vast amounts of computing power - which laid the groundwork for generative AI advances like ChatGPT. SSI will approach scaling differently from OpenAI, he said.

Following are highlights from the interview.

THE RATIONALE FOR FOUNDING SSI

"We've identified a mountain that's a bit different from what I was working [on]...once you climb to the top of this mountain, the paradigm will change... Everything we know about AI will change once again. At that point, the most important superintelligence safety work will take place."

"Our first product will be the safe superintelligence."

WOULD YOU RELEASE AI THAT IS AS SMART AS HUMANS AHEAD OF SUPERINTELLIGENCE?

"I think the question is: Is it safe? Is it a force for good in the world? I think the world is going to change so much when we get to this point that to offer you the definitive plan of what we'll do is quite difficult.

I can tell you the world will be a very different place. The way everybody in the broader world is thinking about what's happening in AI will be very different in ways that are difficult to comprehend. It's going to be a much more intense conversation. It may not just be up to what we decide, also."

HOW WILL SSI DECIDE WHAT CONSTITUTES SAFE AI?

"A big part of the answer to your question will require that we do some significant research. And especially if you have the view as we do, that things will change quite a bit... There are many big ideas that are being discovered.

Many people are thinking about how as an AI becomes more powerful, what are the steps and the tests to do? It's getting a little tricky. There's a lot of research to be done. I don't want to say that there are definitive answers just yet. But this is one of the things we'll figure out."

ON SCALING HYPOTHESIS AND AI SAFETY

"Everyone just says 'scaling hypothesis'. Everyone neglects to ask, what are we scaling? The great breakthrough of deep learning of the past decade is a particular formula for the scaling hypothesis. But it will change... And as it changes, the capabilities of the system will increase. The safety question will become the most intense, and that's what we'll need to address."

ON OPEN-SOURCING SSI'S RESEARCH

"At this point, all AI companies are not open-sourcing their primary work. The same holds true for us. But I think that hopefully, depending on certain factors, there will be many opportunities to open-source relevant superintelligence safety work. Perhaps not all of it, but certainly some."

ON OTHER AI COMPANIES' SAFETY RESEARCH EFFORTS

"I actually have a very high opinion about the industry. I think that as people continue to make progress, all the different companies will realize — maybe at slightly different times — the nature of the challenge that they're facing. So rather than say that we think that no one else can do it, we say that we think we can make a contribution."

(Reporting by Kenrick Cai, Anna Tong and Krystal Hu; Editing by Peter Henderson and Edwina Gibbs)
