The A3 supercomputer's scale can provide up to 26 exaFlops of AI performance, Google says.

Google Cloud announced a new supercomputer virtual-machine series aimed at rapidly training large AI models. Unveiled at the Google I/O conference, the new A3 supercomputer VMs are purpose-built to handle the considerable resource demands of large language models (LLMs).

“A3 GPU VMs were purpose-built to deliver the highest-performance training for today’s ML workloads, complete with modern CPU, improved host memory, next-generation Nvidia GPUs and major network upgrades,” the company said in a statement.

The instances are powered by eight Nvidia H100 GPUs (Nvidia’s newest GPU, which began shipping earlier this month), as well as Intel’s 4th Generation Xeon Scalable processors, 2TB of host memory, and 3.6 TB/s of bisectional bandwidth between the eight GPUs via Nvidia’s NVSwitch and NVLink 4.0 interconnects.

Altogether, Google claims these machines can provide up to 26 exaFlops of performance. That figure is the cumulative performance of the entire supercomputer, not of each individual instance. Still, it dwarfs Frontier, the current record holder for fastest supercomputer, which delivers just over one exaFlop.

According to Google, A3 is the first production-level deployment of its GPU-to-GPU data interface, which Google calls the infrastructure processing unit (IPU). It allows data to be shared at 200 Gbps directly between GPUs without having to go through the CPU. The result is a ten-fold increase in available network bandwidth for A3 virtual machines compared to prior-generation A2 VMs.
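To put the 26-exaFlop headline figure in perspective, a quick back-of-the-envelope calculation shows roughly how many GPUs a cluster of eight-H100 VMs would need to reach it. This is only a sketch: the per-GPU peak below is an assumption taken from Nvidia's published H100 SXM FP8-with-sparsity figure, not a number from Google's announcement, and real training workloads run well below peak.

```python
# Rough sanity check of the 26-exaFlop figure (a sketch, not Google's math).
# Assumption: H100 SXM peak of ~3,958 TFLOPS at FP8 with sparsity, per
# Nvidia's published spec; Google did not state which precision it used.
H100_PEAK_FLOPS = 3.958e15   # per-GPU peak, FLOPS
GPUS_PER_A3_VM = 8           # from the A3 spec in the announcement

cluster_flops = 26e18        # Google's headline figure for the full cluster
gpus_needed = cluster_flops / H100_PEAK_FLOPS
vms_needed = gpus_needed / GPUS_PER_A3_VM

print(f"~{gpus_needed:,.0f} GPUs across ~{vms_needed:,.0f} A3 VMs")
```

Under those assumptions the figure implies a cluster on the order of several hundred A3 VMs, consistent with Google's claim that its Jupiter fabric scales to tens of thousands of interconnected GPUs.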
A3 workloads will run on Google’s specialized Jupiter data center networking fabric, which the company says “scales to tens of thousands of highly interconnected GPUs and allows for full-bandwidth reconfigurable optical links that can adjust the topology on demand.”

Google will offer the A3 in two ways: customers can run it themselves, or they can use a managed service in which Google handles most of the work. For those who opt to run it themselves, the A3 VMs run on Google Kubernetes Engine (GKE) and Google Compute Engine (GCE). With the managed service, the VMs run on Vertex AI, the company’s managed machine-learning platform.

The A3 virtual machines are available in preview, which requires filling out an application to join the Early Access Program. Google makes no promises that applicants will get a spot in the program.
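For readers weighing the self-managed path, provisioning A3 capacity as a GKE node pool might look something like the sketch below. The cluster name is hypothetical, and the machine-type and accelerator identifiers are assumptions based on Google Cloud's naming conventions for the A3 series; confirm the exact values against the Early Access documentation for your project.

```shell
# Sketch: adding an A3 node pool to an existing GKE cluster.
# "my-training-cluster" is a hypothetical name; the machine type and
# accelerator identifier below are assumed from Google Cloud's naming
# conventions and may differ in the Early Access Program.
gcloud container node-pools create a3-pool \
  --cluster=my-training-cluster \
  --region=us-central1 \
  --machine-type=a3-highgpu-8g \
  --accelerator=type=nvidia-h100-80gb,count=8 \
  --num-nodes=2
```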