Samsung demos 512GB DDR5 memory aimed at supercomputing, AI workloads

News Analysis
Apr 12, 2021 | 2 mins
Servers

It's twice as fast as existing memory and has twice the capacity, paving the way for new use cases.

[Image: Samsung memory module. Credit: Samsung]

Samsung Electronics last month announced the creation of a 512GB DDR5 memory module, its first since the JEDEC consortium developed and released the DDR5 standard in July 2020.

The new modules double the maximum capacity of existing DDR4 and offer data transfer rates of up to 7,200Mbps, roughly double that of conventional DDR4. The memory will be able to handle high-bandwidth workloads in applications such as supercomputing, artificial intelligence, machine learning, and data analytics, the company says.
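As a rough illustration of what those transfer rates mean per module, peak bandwidth can be estimated as transfers per second times bytes moved per transfer. The 64-bit module bus width and the DDR4-3200 baseline below are standard assumptions, not figures from Samsung's announcement:

```python
# Back-of-envelope peak bandwidth for a single memory module.
# Assumes the standard 64-bit (8-byte) data bus per DIMM; DDR4-3200
# is used as the baseline since it is JEDEC's top standard DDR4 grade.

def peak_bandwidth_gbs(transfer_mtps: float, bus_bits: int = 64) -> float:
    """Peak bandwidth in GB/s: (megatransfers/s) x (bytes per transfer)."""
    return transfer_mtps * (bus_bits / 8) / 1000

ddr5 = peak_bandwidth_gbs(7200)   # DDR5-7200 module
ddr4 = peak_bandwidth_gbs(3200)   # DDR4-3200 baseline (assumed)
print(f"DDR5-7200: {ddr5:.1f} GB/s vs DDR4-3200: {ddr4:.1f} GB/s")
```

On these assumptions a DDR5-7200 module peaks at 57.6 GB/s against 25.6 GB/s for DDR4-3200, a bit over 2x.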

Samsung has also switched to High-K Metal Gate (HKMG) process technology for the insulation layer, instead of the traditional silicon oxynitride. Intel adopted this for its Penryn generation of CPUs in 2008. It allows for transistor shrinkage while at the same time reducing electrical current leakage, and thus heat.

That translates to around 13% less power draw than older technologies, and in a dense data center, that can scale to considerable power reduction.
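To see how a 13% per-module reduction scales at data-center size, a back-of-envelope calculation helps. The per-DIMM wattage, DIMM count, and server count below are illustrative assumptions, not Samsung figures:

```python
# Back-of-envelope fleet power savings from a ~13% per-DIMM reduction.
# All inputs (5 W/DIMM, 16 DIMMs/server, 1,000 servers) are
# illustrative assumptions, not vendor-published numbers.

def fleet_savings_watts(watts_per_dimm: float, dimms_per_server: int,
                        servers: int, reduction: float = 0.13) -> float:
    """Total watts saved across the fleet at the given reduction rate."""
    return watts_per_dimm * dimms_per_server * servers * reduction

saved = fleet_savings_watts(5, 16, 1000)
print(f"{saved / 1000:.1f} kW saved")  # ~10.4 kW, continuous
```

Even with these modest assumptions the fleet saves on the order of 10 kW continuously, before counting the cooling load that no longer has to remove that heat.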

If all goes as planned, DDR5 should arrive around the same time as Intel's next-generation Xeon Sapphire Rapids and AMD's Epyc Genoa processors, along with Arm-based server chips from vendors such as Ampere.

We will also see the advent of other technologies, like PCI Express Gen5 and the CXL interconnect. The Compute Express Link (CXL) protocol is rapidly gaining popularity because it is a mesh rather than a point-to-point protocol: it allows memory to be pooled, and it lets processors access each other's memory, something PCIe cannot do.
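The pooling idea can be illustrated with a toy model. This is purely a conceptual sketch, not the CXL API (real pooling is handled in hardware and firmware): instead of each host being limited to its own fixed bank of DIMMs, hosts borrow capacity from, and return it to, one shared pool.

```python
# Toy model of pooled memory: hosts draw capacity from a shared pool
# rather than owning fixed local DIMMs. Conceptual only; class and
# method names here are invented for illustration.

class MemoryPool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations: dict[str, int] = {}

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, host: str, gb: int) -> bool:
        if gb > self.free_gb():
            return False          # pool exhausted
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return True

    def release(self, host: str) -> None:
        self.allocations.pop(host, None)

pool = MemoryPool(1024)
pool.allocate("host-a", 512)      # host-a bursts above what local DIMMs allow
pool.allocate("host-b", 256)
print(pool.free_gb())             # 256 GB still available to any host
pool.release("host-a")
print(pool.free_gb())             # 768 GB back in the pool
```

The point of the model: capacity freed by one host is immediately usable by any other, which is what fixed per-host memory cannot offer.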

Samsung is currently sampling different variations of its DDR5 memory product family to customers for verification and, ultimately, certification with their products to accelerate AI/ML, exascale computing, analytics, networking, and other data-intensive workloads.