Cloud service provider Lambda is working to build a GPU cloud for AI workloads. Credit: Shutterstock

AI cloud service provider Lambda has scored a $320 million cash infusion to build out its GPU-based services, which provide AI training clusters made up of thousands of Nvidia accelerators.

Lambda is the latest cloud company to offer GPU processing – instead of the standard CPU processing – dedicated to all things AI, particularly inference and training. Vultr, CoreWeave, and Voltage Park all offer similar cloud GPU services.

Lambda is preparing to deploy “tens of thousands” of Nvidia GPUs, including the current top-of-the-line H100 Hopper accelerators as well as Nvidia’s forthcoming H200 GPU accelerators, which are set to double the performance of the H100. Lambda is also looking to deploy Nvidia’s hybrid GH200 CPU/GPU superchips.

Lambda’s stated mission is to build “the #1 AI compute platform in the world,” and to accomplish this, “we’ll need lots of Nvidia GPUs, ultra-fast networking, lots of data center space, and lots of great new software to delight you and your AI engineering team,” the company said in a statement announcing the funding.

The $320 million Series C round is led by a number of venture funds, including B Capital, SK Telecom, and T. Rowe Price Associates, Inc., along with existing investors Crescent Cove, Mercato Partners, 1517 Fund, Bloomberg Beta, and Gradient Ventures, among others.

“With this new financing, Lambda will accelerate the growth of our GPU cloud, ensuring AI engineering teams have access to thousands of Nvidia GPUs with high-speed Nvidia Quantum-2 InfiniBand networking,” the company said.

This is undoubtedly music to Nvidia CEO Jensen Huang’s ears: he has been pushing the notion of dedicated AI data centers, called AI factories, populated entirely with GPUs rather than the x86 CPUs found in traditional data centers.
Additionally, on the most recent earnings call after Nvidia’s blowout quarter, Huang talked at length about the benefits of expanding GPU processing to fields beyond AI in a move to muscle in on x86 territory.

Founded in 2012, Lambda has been working with GPU systems since 2017, when it first started to experiment with transformer models. Lambda offers colocation services specifically designed for dense deployments and also resells access to Nvidia’s DGX SuperPODs. The latter is likely to be Lambda’s bread and butter, as it is much cheaper to rent AI hardware than to purchase and maintain it. This is fueling the rise of AI as a service, which lets customers rent time on AI-ready equipment rather than buy their own.

The real challenge for Lambda may be getting the hardware at all. TSMC is making chips as fast as it can, but demand is enormous and backlogs stretching from weeks to months remain.