OpenAI, still haunted by its chaotic past, is trying to grow up


A cellphone displays a new feature for ChatGPT, OpenAI’s chatbot powered by artificial intelligence, in New York, July 10, 2023. OpenAI, the maker of ChatGPT, is struggling to transform itself into a profit-driven company while satisfying worries about the safety of artificial intelligence. — The New York Times

SAN FRANCISCO: OpenAI, the often troubled standard-bearer of the tech industry’s push into artificial intelligence, is making substantial changes to its management team, and even how it is organised, as it courts investments from some of the wealthiest companies in the world.

Over the past several months, OpenAI, the maker of the online chatbot ChatGPT, has hired a who’s who of tech executives, disinformation experts and AI safety researchers. It has also added seven board members – including a four-star Army general who ran the National Security Agency – while revamping efforts to ensure that its AI technologies do not cause serious harm.

OpenAI is also in talks with investors such as Microsoft, Apple, Nvidia and the investment firm Thrive for a deal that would value it at US$100bil (RM434.25bil). And the company is considering changes to its corporate structure that would make it easier to attract investors.

The San Francisco startup, after years of public conflict between management and some of its top researchers, is trying to look more like a no-nonsense company ready to lead the tech industry’s march into artificial intelligence. OpenAI is also trying to push last year’s high-profile fight over the management of Sam Altman, its CEO, into the background.

But interviews with more than 20 current and former OpenAI employees and board members show that the transition has been difficult. Early employees continue to leave, even as new workers and new executives pour in. And rapid growth hasn’t resolved a fundamental question of what OpenAI is supposed to be: Is it a cutting-edge AI lab created for the benefit of humanity, or an aspiring industry giant dedicated to profits?

Today, OpenAI has more than 1,700 employees, and 80% of them started after the release of ChatGPT in November 2022. Altman and other leaders have led the recruitment of executive hires, while the new chair, Bret Taylor, a former Facebook executive, has overseen the expansion of the board.

“While startups must naturally evolve and adapt as their impact grows, we recognize OpenAI is navigating this transformation at an unprecedented pace,” Taylor said in a statement emailed to The New York Times. “Our board and the dedicated team at OpenAI remain focused on safely building A.I. that can solve hard problems for everyone.”

A number of the new executives played prominent roles in other tech companies. Sarah Friar, OpenAI’s new chief financial officer, was the CEO of Nextdoor. Kevin Weil, OpenAI’s new chief product officer, was the senior vice president of product at Twitter. Ben Nimmo led Facebook’s battle against deceptive social media campaigns. Joaquin Candela oversaw Facebook’s efforts to reduce the risks of artificial intelligence. Now, the two men have similar roles at OpenAI.

OpenAI also told employees Friday that Chris Lehane, a veteran of the Clinton White House who had a senior role at Airbnb and joined OpenAI this year, would be its head of global policy.

But of 13 people who helped found OpenAI in late 2015 with a mission to create artificial general intelligence, or AGI – a machine that can do anything the human brain can do – only three remain. One, Greg Brockman, the company’s president, has taken a leave of absence through the end of the year, citing the need for time off after nearly a decade of work.

“It is pretty common to see these kinds of additions – and also subtractions – but we are under such bright lights,” said Jason Kwon, OpenAI’s chief strategy officer. “Everything becomes magnified.”

Since its earliest days as a nonprofit research lab, OpenAI has struggled with arguments over its goals. In 2018, Elon Musk, its primary backer, departed after a dispute with its other founders. In early 2022, a group of key researchers, worried that commercial forces were pushing OpenAI’s technologies into the marketplace before proper guardrails were in place, left to form a rival AI outfit, Anthropic.

Driven by similar concerns, OpenAI’s board suddenly fired Altman late last year. He was reinstated five days later.

OpenAI has split from many of the employees who questioned Altman and from others who were less interested in building a regular tech company than in doing advanced research. Echoing complaints from other employees, one researcher quit over OpenAI’s efforts to claw back OpenAI shares from employees – potentially worth millions of dollars – if they publicly spoke out against it. OpenAI has since reversed the practice.

OpenAI is driven by two forces that are not always compatible.

On one hand, the company is driven by money – lots of it. Annual revenue has now topped US$2bil (RM8.68bil), according to a person familiar with its income. ChatGPT has more than 200 million users each week – twice the number from nine months ago. It is unclear how much the company is spending each year, though one estimate puts the figure at US$7bil (RM30.39bil). Microsoft, which is already OpenAI's largest investor, has committed US$13bil (RM56.44bil) to the AI company.

But OpenAI is considering making big changes to its structure as it looks for more investments. Right now, the board of the original OpenAI – formed as a nonprofit – controls the organisation, without official input from investors. As part of its new funding discussions, OpenAI is considering changes that would make its structure more appealing to investors, according to three people familiar with the negotiations. But it has not yet settled on a new structure.

OpenAI is also driven by technologies that worry many AI researchers, including some OpenAI employees. They argue that these technologies could help spread disinformation, drive cyberattacks or even destroy humanity. That tension led to a blowup in November, when four board members, including the chief scientist and co-founder Ilya Sutskever, removed Altman.

After Altman reasserted his control, a cloud hung over the company. Sutskever had not returned to work.

(The Times sued OpenAI and Microsoft in December for copyright infringement of news content related to AI systems.)

With another researcher, Jan Leike, Sutskever had built OpenAI’s “Superalignment” team, which explored ways of ensuring that its future technologies would not do harm.

In May, Sutskever left OpenAI and started his own AI company. Within minutes, Leike also left, joining Anthropic. “Safety culture and processes have taken a back seat to shiny products,” he said. Sutskever and Leike did not respond to requests for comment.

Others have followed them out the door.

“I’m still afraid that OpenAI and other AI companies don’t have an adequate plan to manage the risks of the human-level and beyond-human-level AI systems they are raising billions of dollars to build,” said William Saunders, a researcher who recently left the company.

As Sutskever and Leike departed, OpenAI moved their work under another co-founder, John Schulman. While the Superalignment team had focused on harms that might happen years in the future, the new team explored both near- and long-term risks.

At the same time, OpenAI hired Friar as its chief financial officer (she previously held the same role at Square) and Weil as its chief product officer. Friar and Weil did not respond to requests for comment.

Some former executives, who spoke on the condition of anonymity because they had signed nondisclosure agreements, expressed scepticism that OpenAI’s troubled past was behind it. Three of them pointed to Aleksander Madry, who once led OpenAI’s Preparedness team, which explored catastrophic AI risks. After a disagreement over how he and his team would fit into the larger organisation, Madry moved to a different research project.

As some employees departed, they were asked to sign legal papers that said they would lose their OpenAI shares if they spoke out against the company. This incited new concerns among the staff, even after the company revoked the practice.

In early June, a researcher, Todor Markov, posted a message on the company's internal messaging system announcing his resignation over the issue, according to a copy of the message viewed by The Times.

He said OpenAI’s leadership had repeatedly misled employees about the issue. Because of this, he argued, the company’s leadership could not be trusted to build AGI – an echo of what the company’s board had said when it fired Altman.

“You often talk about our responsibility to develop AGI safely and to distribute the benefits broadly,” he wrote. “How do you expect to be trusted with that responsibility?”

Days later, OpenAI announced that Paul M. Nakasone, a retired US Army general, had joined its board. On a recent afternoon, he was asked what he thought of the environment he had stepped into, given that he was new to the AI field.

“New to AI? I am not new to AI,” he said in a phone interview. “I ran the NSA. I have been dealing with this stuff for years.”

Last month, Schulman, the co-founder who helped oversee OpenAI’s new safety efforts, also resigned from the company, saying he wanted to return to “hands-on” technical work. He also joined Anthropic.

“Scaling a company is really hard. You have to make trade-off decisions all the time. And some people might not like those decisions,” Kwon said. “Things are just a lot more complicated.” – The New York Times
