KUALA LUMPUR: YTL Power International Bhd has announced the formation of YTL AI Cloud, a "specialised provider of massive-scale GPU-based accelerated computing".
In a statement, the group said it will deploy and manage one of the world’s most advanced supercomputers on Nvidia Grace Blackwell-powered DGX Cloud – an AI supercomputer for accelerating the development of generative AI.
YTL said it is among the first companies to adopt Nvidia GB200 NVL72, which is a multi-node, liquid-cooled, rack-scale system with fifth-generation NVLink.
The supercomputer will be interconnected by the Nvidia Quantum InfiniBand networking platform. Each GB200 NVL72 rack acts as a single GPU with 1.4 exaflops of AI performance and 30TB of fast memory, and is designed for the most compute-intensive workloads.
The YTL AI Supercomputer will deliver more than 300 exaflops of AI compute, making it one of the fastest supercomputers in the world.
“Nvidia is working with YTL AI Cloud to bring a world-class accelerated computing platform to Southeast Asia – helping drive scientific research, innovation and economic growth across the region,” said Jensen Huang, founder and CEO of Nvidia.
“This latest supercomputer marks one of the first deployments of the NVIDIA GB200 Grace Blackwell Superchip on DGX Cloud, supporting the growth of accelerated computing in the Asia Pacific region.”
YTL Power International managing director Datuk Seri Yeoh Seok Hong said: "We are proud to be working with NVIDIA and the Malaysian government to bring powerful AI cloud computing to Malaysia.
"We are excited to bring this supercomputing power to the Asia Pacific region, which has been home to many of the fastest-growing cloud regions and many of the most innovative users of AI in the world.”
The YTL AI Supercomputer will be housed in a data centre facility at the 1,640-acre YTL Green Data Center Campus in Johor, powered by renewable energy from an on-site 500MW solar facility.
The supercomputer will help meet the demand for highly scalable, high-performance cloud-based solutions for AI/ML workloads.