High-bandwidth memory nearly sold out until 2026

News
13 May 2024
CPUs and Processors | Data Center | High-Performance Computing

While it might be tempting to blame Nvidia for the shortage of HBM, it’s not alone in driving high-performance computing and demand for the memory HPC requires.

South Korean memory manufacturer SK Hynix has announced that its supply of high-bandwidth memory (HBM) has been sold out for 2024 and for most of 2025. Basically, this means that demand for HBM exceeds supply for at least a year, and any orders placed now won’t be filled until 2026.

The news comes after similar comments were made in March by the CEO of Micron, who said that the company’s HBM production had been sold out through late 2025.

HBM is used in GPUs to provide extremely fast memory access, far faster than standard DRAM, and it is key to AI processing performance. No HBM, no GPU cards.

Bottom line: Expect a new supply-chain headache thanks to HBM being unavailable until at least 2026. It doesn’t matter how many GPUs TSMC and Intel make – those cards are going nowhere without memory.

Hynix is the leader in the HBM space with about 49% market share, according to TrendForce. Micron’s presence is more meager, at about 4% to 6%. The rest is primarily supplied by Samsung, which has not made any statement as to availability. But chances are, HBM demand has consumed everything Samsung can make as well.

HBM is more expensive, more difficult, and slower to make than standard DRAM. Its fabrication plants, like CPU fabs, take time to build and ramp, and the three HBM makers couldn’t keep up with the explosive demand.

While it is easy to blame Nvidia for this shortage, it’s not alone in driving high-performance computing and the memory needed to go with it. AMD is making a run, Intel is trying, and many major cloud service providers are building their own processors. This includes Amazon, Facebook, Google, and Microsoft. All of them are making their own custom silicon, and all need HBM memory.

That leaves the smaller players on the outside looking in, says Jim Handy, principal analyst with Objective Analysis. “It’s a much bigger challenge for the smaller companies. In chip shortages the suppliers usually satisfy their biggest customers’ orders and send their regrets to the smaller companies. This would include companies like SambaNova, a start-up with an HBM-based AI processor,” he said.

DRAM fabs can be rapidly shifted from one product to another, as long as all products use the exact same process. This means they can move easily from DDR4 to DDR5, or from DDR to LPDDR, or to the GDDR used on graphics cards.

That’s not the case with HBM, which relies on a complex and highly technical manufacturing step, through-silicon vias (TSVs), that is not used anywhere else. The wafers also need to be modified in ways standard DRAM does not, and that can make shifting manufacturing priorities very difficult, said Handy.

So if you recently placed an order for an HPC GPU, you may have to wait. Up to 18 months.
