AWS claims its new Amazon EC2 M7g and R7g instances provide 25% better performance than the previous generation of instances.

Amazon Web Services (AWS) has announced availability of its new Amazon EC2 M7g and R7g instances, the latest generation of instances for general-purpose and memory-intensive applications, running Amazon's custom Arm processor, known as Graviton3. This is the second offering of Graviton3-based instances from AWS; it announced C7g instances for compute-intensive workloads last May.

Both the M7g and the R7g instances deliver up to 25% higher performance than the equivalent sixth-generation instances. Part of the performance bump comes from the adoption of DDR5 memory, which offers up to 50% higher memory bandwidth than DDR4, but there is also considerable gain from the new Graviton3 chip itself. Amazon claims that, compared with instances running on Graviton2, the new M7g and R7g instances offer up to 25% higher compute performance, nearly twice the floating-point performance, twice the cryptographic performance, and up to three times faster machine-learning inference.

The M7g instances are aimed at general-purpose workloads such as application servers, microservices, and midsize data stores. They scale from one vCPU with 4GiB of memory and 12.5Gbps of network bandwidth to 64 vCPUs with 256GiB of memory and 30Gbps of network bandwidth. (A GiB, or gibibyte, is 2^30 bytes, about 1.074 billion; a GB, or gigabyte, is an even 10^9 bytes, so 1GB works out to roughly 0.93GiB. The binary unit is the more accurate way to describe memory sizes, but the term gibibyte hasn't caught on. A quick arithmetic check appears at the end of this article.)

The R7g instances are tuned for memory-intensive workloads such as in-memory databases, caches, and real-time big-data analytics. They scale from one vCPU with 8GiB of memory and 12.5Gbps of network bandwidth to 64 vCPUs with 512GiB of memory and 30Gbps of network bandwidth.

New AWS AI partnership

AWS has also announced an expanded partnership with the startup Hugging Face to make more of its AI tools available to AWS customers. These include Hugging Face's language-generation models for building generative AI applications that handle tasks such as text summarization, question answering, code generation, image creation, and writing essays and articles.

The models will run on AWS's purpose-built machine-learning accelerators for training (AWS Trainium) and inference (AWS Inferentia) of large language and vision models. The benefits include faster training and the ability to scale low-latency, high-throughput inference. Amazon claims Trainium instances offer 50% lower cost-to-train than comparable GPU-based instances.

Hugging Face models on AWS can be used in three ways: through SageMaker JumpStart, AWS's tool for building and deploying machine-learning models; through the Hugging Face AWS Deep Learning Containers (DLCs); or via tutorials for deploying customers' own models to AWS Trainium or AWS Inferentia.
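To make the gibibyte arithmetic above concrete, here is a short Python check of the conversion, including the top M7g memory size expressed in decimal gigabytes:

# Gibibyte (binary) vs. gigabyte (decimal) arithmetic.
GIB = 2**30   # 1 GiB = 1,073,741,824 bytes
GB = 10**9    # 1 GB  = 1,000,000,000 bytes

print(f"1 GiB = {GIB / GB:.3f} GB")          # 1 GiB = 1.074 GB
print(f"1 GB  = {GB / GIB:.3f} GiB")         # 1 GB  = 0.931 GiB
print(f"256 GiB = {256 * GIB / GB:.1f} GB")  # largest M7g size: 274.9 GB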
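For readers who want to try the new sizes, the sketch below launches the smallest M7g instance with the boto3 SDK. It is a minimal illustration, not AWS's documented onboarding path: the AMI ID is a placeholder, and because Graviton3 is an Arm chip, whatever image you substitute must be built for the arm64 architecture.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one m7g.medium (1 vCPU, 4GiB), the bottom of the M7g range;
# m7g.16xlarge (64 vCPUs, 256GiB) is the top.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder; must be an arm64 AMI
    InstanceType="m7g.medium",
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])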
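And as one illustration of the Hugging Face options, here is a minimal sketch of the Deep Learning Containers route using the SageMaker Python SDK. The model ID, task, container version strings, and inference instance type are illustrative assumptions, not details from the announcement:

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

# Role resolution works inside SageMaker; elsewhere, pass an IAM role ARN.
role = sagemaker.get_execution_role()

# Pull a summarization model straight from the Hugging Face Hub
# (the model and task here are illustrative choices).
model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "sshleifer/distilbart-cnn-12-6",
        "HF_TASK": "summarization",
    },
    role=role,
    transformers_version="4.26",  # assumed DLC version combination
    pytorch_version="1.13",
    py_version="py39",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "Paste a long article here to summarize."}))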