A software update for AI benchmarking and a new networking chip are the latest developments in AI speeds and feeds.

AI and machine learning systems are working with data sets in the billions of entries, which means speeds and feeds matter more than ever. Two new announcements reinforce that point, each aimed at speeding data movement for AI.

For starters, Nvidia just published new performance numbers for its H100 Hopper compute GPU in MLPerf 3.0, a prominent benchmark suite for deep-learning workloads. Naturally, Hopper surpassed its predecessor, the A100 Ampere product, in time-to-train measurements, and it is also seeing improved performance thanks to software optimizations.

MLPerf runs a suite of models and workloads designed to simulate real-world use. These workloads include image classification (ResNet-50 v1.5), natural language processing (BERT Large), speech recognition (RNN-T), medical imaging (3D U-Net), object detection (RetinaNet), and recommendation (DLRM).

Nvidia first published H100 test results using the MLPerf 2.1 benchmark back in September 2022, showing the H100 was 4.5 times faster than the A100 across various inference workloads. Under the newer MLPerf 3.0 suite, the H100 logged improvements ranging from 7% to 54% over its MLPerf 2.1 results. Nvidia also said the medical imaging model ran 30% faster under MLPerf 3.0.

It should be noted that Nvidia ran the benchmarks itself, not an independent third party. And Nvidia isn't the only vendor running benchmarks: dozens of others, including Intel, ran their own and will likely see performance gains as well.

Network chip for AI

The second announcement comes from Enfabrica Corp., which has emerged from stealth mode to announce a new class of chips called Accelerated Compute Fabric (ACF) processors. Enfabrica said the chips are designed specifically for AI, machine learning, HPC, and in-memory databases, with the goal of improving scalability, performance, and total cost of ownership.

Enfabrica was founded in 2020 by engineers from Broadcom, Google, Cisco, AWS, and Intel. Its ACF silicon was developed from the ground up to address the scaling problems of accelerated computing, which grows more data-intensive by the minute. The company claims the devices deliver scalable, streaming, multi-terabit-per-second data movement between GPUs, CPUs, accelerators, memory, and networking devices. According to Enfabrica, the processor eliminates tiers of latency and relieves bottlenecks in top-of-rack network switches, server NICs, PCIe switches, and CPU-attached DRAM.

ACF will offer 50 times the DRAM expansion of existing GPU networks via Compute Express Link (CXL), the high-speed interconnect standard that lets servers share physical memory. Enfabrica has not yet set a release date but says an update is coming in the near future.
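To make the benchmark numbers above concrete, here is a minimal Python sketch of the kind of throughput measurement an MLPerf-style inference test reports: push a fixed number of batches through a model and derive samples per second. It is an illustrative stand-in, not the actual MLPerf harness; the use of torchvision's stock ResNet-50, the batch size, and the iteration counts are all assumptions made for demonstration.

    # Illustrative sketch only; not the MLPerf harness. Model choice,
    # batch size, and iteration counts are assumptions.
    import time
    import torch
    import torchvision.models as models

    def measure_throughput(model, batch_size=32, iters=50, device="cuda"):
        model = model.eval().to(device)
        x = torch.randn(batch_size, 3, 224, 224, device=device)
        with torch.no_grad():
            for _ in range(5):            # warm-up passes, excluded from timing
                model(x)
            if device == "cuda":
                torch.cuda.synchronize()  # drain queued GPU work before timing
            start = time.perf_counter()
            for _ in range(iters):
                model(x)
            if device == "cuda":
                torch.cuda.synchronize()
            elapsed = time.perf_counter() - start
        return (batch_size * iters) / elapsed  # samples per second

    if __name__ == "__main__":
        resnet = models.resnet50()  # same family as MLPerf's ResNet-50 v1.5 task
        device = "cuda" if torch.cuda.is_available() else "cpu"
        print(f"{measure_throughput(resnet, device=device):.1f} images/sec on {device}")

Running the same measurement on two GPUs and dividing the two samples-per-second figures yields the kind of relative speedup Nvidia is quoting: a 4.5x result simply means the newer chip pushed 4.5 times as many samples per second as the older one on the same workload.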