Microsoft Corp. announced new tools to help cloud customers build and deploy artificial intelligence (AI) applications, the company’s latest effort to get more revenue from generative AI.
Azure AI Foundry will make it easier to switch between the large language models undergirding artificial intelligence. A customer using an older OpenAI product can try a newer one, or switch from OpenAI to tools from Mistral or Meta Platforms Inc., cloud-computing chief Scott Guthrie said in an interview. Besides mixing and matching models, clients will be able to check that applications are working properly and delivering a decent return on investment.
Microsoft, which announced the new offerings on Tuesday (Nov 19) at its annual Ignite conference in Chicago, is giving away the software in the hopes of persuading corporate customers to buy more of its cloud services.
The company currently has 60,000 customers using Azure AI, a cloud service that lets developers build and run applications using any of 1,700 different AI models. But the process remains cumbersome, and it’s hard to keep up with the constant supply of new models and updates. Customers don’t want to redo their applications every time something novel comes along; nor do they want to switch models without knowing which tasks they’re best suited to.
“What developers are often finding is that each new model – even if it’s in the same family of models – has benefits in terms of better answers or better performance on many things, but you might have regressions on other things,” Guthrie said. “If you’re a business that has got a mission-critical application, you don’t want to just flip a switch and hope it works.”
Parts of Foundry come from an older offering called Azure AI Studio. Other features are new, including tools that help companies deploy AI agents, semi-autonomous digital assistants that can take actions on a user’s behalf.
Guthrie said making it easier for customers to switch between models won’t weaken Microsoft’s close partnership with OpenAI. For one thing, he noted, it will now be simpler to choose the most appropriate OpenAI model for each job. Still, Microsoft knows offering choice is key to attracting and retaining clients.
“For a huge number of use cases, the OpenAI models are absolutely the best today in the industry,” he said. “At the same time, there are different use cases, and sometimes people do have different reasons for wanting to use different things. Choice is also going to be important.”
Even as it tries to persuade customers to invest more in AI, Microsoft has been warning investors that cloud sales growth will decline because the company lacks sufficient data center capacity to meet demand. Guthrie said those constraints are temporary, and the company insists it will have sufficient computing power going forward.
Microsoft, which announced its first homegrown cloud-computing and AI chips at the Ignite conference last year, is also unveiling two new semiconductors. One is a security microprocessor that protects things like encryption and signing keys. Starting next year, the new chip will be installed in every new Microsoft data center server, the company said.
The second offering is a data processing unit, a type of networking chip – one Nvidia Corp. also makes – that moves data to computing and AI semiconductors more quickly, speeding up tasks. Microsoft and its rivals are chasing increasingly powerful cloud systems to train and run AI models.
“The models are growing so big,” said Rani Borkar, the Microsoft vice president who oversees chip design and development. Each layer of chips, servers, software and other components has to improve and perform optimally, she said. “You really have to have one plus one plus one plus one be greater than four or five.” Data processing units are part of that, helping speed network and storage performance while consuming less power, she said. – Bloomberg