Ilya Sutskever on how AI will change and his new startup Safe Superintelligence


SAN FRANCISCO/NEW YORK (Reuters) - Ilya Sutskever, OpenAI's former chief scientist, has launched a new company called Safe Superintelligence (SSI), aiming to develop safe artificial intelligence systems that far surpass human capabilities.

He and his co-founders outlined their plans for the startup in an exclusive interview with Reuters this week.

Sutskever, 37, is one of the most influential technologists in AI and trained under Geoffrey Hinton, known as the "Godfather of AI". Sutskever was an early advocate of scaling - the idea that AI performance improves with vast amounts of computing power - which laid the groundwork for generative AI advances like ChatGPT. SSI will approach scaling differently from OpenAI, he said.

Following are highlights from the interview.

THE RATIONALE FOR FOUNDING SSI

"We've identified a mountain that's a bit different from what I was working [on]...once you climb to the top of this mountain, the paradigm will change... Everything we know about AI will change once again. At that point, the most important superintelligence safety work will take place."

"Our first product will be the safe superintelligence."

WOULD YOU RELEASE AI THAT IS AS SMART AS HUMANS AHEAD OF SUPERINTELLIGENCE?

"I think the question is: Is it safe? Is it a force for good in the world? I think the world is going to change so much when we get to this point that to offer you the definitive plan of what we'll do is quite difficult.

I can tell you the world will be a very different place. The way everybody in the broader world is thinking about what's happening in AI will be very different in ways that are difficult to comprehend. It's going to be a much more intense conversation. It may not just be up to what we decide, also."

HOW WILL SSI DECIDE WHAT CONSTITUTES SAFE AI?

"A big part of the answer to your question will require that we do some significant research. And especially if you have the view as we do, that things will change quite a bit... There are many big ideas that are being discovered.

Many people are thinking about how as an AI becomes more powerful, what are the steps and the tests to do? It's getting a little tricky. There's a lot of research to be done. I don't want to say that there are definitive answers just yet. But this is one of the things we'll figure out."

ON SCALING HYPOTHESIS AND AI SAFETY

"Everyone just says 'scaling hypothesis'. Everyone neglects to ask, what are we scaling? The great breakthrough of deep learning of the past decade is a particular formula for the scaling hypothesis. But it will change... And as it changes, the capabilities of the system will increase. The safety question will become the most intense, and that's what we'll need to address."

ON OPEN-SOURCING SSI’S RESEARCH

"At this point, all AI companies are not open-sourcing their primary work. The same holds true for us. But I think that hopefully, depending on certain factors, there will be many opportunities to open-source relevant superintelligence safety work. Perhaps not all of it, but certainly some."

ON OTHER AI COMPANIES' SAFETY RESEARCH EFFORTS

"I actually have a very high opinion about the industry. I think that as people continue to make progress, all the different companies will realize — maybe at slightly different times — the nature of the challenge that they're facing. So rather than say that we think that no one else can do it, we say that we think we can make a contribution."

(Reporting by Kenrick Cai, Anna Tong and Krystal Hu; Editing by Peter Henderson and Edwina Gibbs)
