Why NVMe over Fabric matters

News Analysis
May 18, 2018
Computers and Peripherals

With NVMe over Fabric, a big limitation on SSDs in the enterprise is going away. This will be worth the upgrade.

In my earlier blog post on SSD storage news from HPE, Hitachi and IBM, I touched on the significance of NVMe over Fabric (NoF). But not wanting to distract from the main storage news, I didn’t go into detail. I’ll do that in this post.

Hitachi Vantara goes all in on NVMe over Fabric

First, though, an update on the news from Hitachi Vantara, which I initially said had not commented yet on NoF. It turns out they are all in.

“Hitachi Vantara currently offers, and continues to expand support for, NVMe in our hyperconverged UCP HC line. As NVMe matures over the next year, we see opportunities to introduce NVMe into new software-defined and enterprise storage solutions. More will follow, but it confuses the conversation to pre-announce things that customers cannot implement today,” said Bob Madaio, vice president, Infrastructure Solutions Group at Hitachi Vantara, in an email to me.

Hitachi has good reason to get NoF religion like everyone else. NoF is a game-changer. There are two primary interfaces for SSDs: SATA and PCI Express. There’s also Serial Attached SCSI (SAS), but for the most part, people have used SATA.

SATA is a legacy hard-drive interface dating back to 2001 that even a cheap consumer SSD can easily max out. For a while, I did SSD reviews for a consumer-oriented enthusiast site, and pretty much every drive topped out at the same read and write speeds. It didn’t matter if it was a “high end” drive or “midrange”; performance always fell within the same narrow range. SSD chips were getting faster, but the SATA bus was a huge bottleneck.

The fact is, the SATA bus is effectively stuck at revision 3.0. The working group has bumped the spec to version 3.3, but the line rate hasn’t budged, and most motherboards and SSDs play it safe at rev 3.0 for maximum compatibility. That’s a 6Gbit/sec. interface from 2009. Great for a laptop. Not so great for a server.
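
To put a rough number on that ceiling, here’s a quick back-of-the-envelope calculation using the published SATA 3.0 figures (a sketch, not a benchmark):

```python
# Why SATA caps out consumer SSDs: published SATA 3.0 numbers, not a benchmark.
SATA3_LINE_RATE_GBPS = 6.0      # SATA revision 3.0 line rate, in Gbit/sec
ENCODING_EFFICIENCY = 8 / 10    # 8b/10b encoding: 10 bits on the wire per 8 bits of data

effective_gbps = SATA3_LINE_RATE_GBPS * ENCODING_EFFICIENCY  # 4.8 Gbit/sec of real data
effective_mb_per_sec = effective_gbps * 1000 / 8             # ~600 MB/sec ceiling

print(f"Usable SATA 3.0 bandwidth: roughly {effective_mb_per_sec:.0f} MB/sec")
# Protocol overhead eats a bit more, which is why almost every SATA SSD
# reviews at around 500-560 MB/sec sequential, no matter how fast its flash is.
```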

The Power of NVMe

For the best throughput, you need a PCIe-based card, which has much greater bandwidth than SATA. NVMe is the data transfer protocol designed to work with PCI Express and exploit the massively parallel nature of flash memory, something SATA simply can’t handle. NVMe can handle up to 64,000 queues, and each queue can hold 64,000 commands at the same time. SATA, by contrast, has a single queue that holds just 32 commands.
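
The gap in raw command parallelism is easy to quantify. This little sketch just multiplies out the protocol maximums (spec ceilings, not what any real workload issues):

```python
# Protocol ceilings for outstanding commands: NVMe spec maximums vs. SATA's
# single 32-slot native command queue. These are upper bounds, not workloads.
NVME_MAX_QUEUES = 64_000
NVME_COMMANDS_PER_QUEUE = 64_000
SATA_QUEUES = 1
SATA_COMMANDS_PER_QUEUE = 32

nvme_in_flight = NVME_MAX_QUEUES * NVME_COMMANDS_PER_QUEUE  # 4,096,000,000
sata_in_flight = SATA_QUEUES * SATA_COMMANDS_PER_QUEUE      # 32

print(f"NVMe can have up to {nvme_in_flight:,} commands in flight")
print(f"SATA can have up to {sata_in_flight:,} commands in flight")
```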

Well, the NVM Express 1.3 spec introduced last year adds support for NVMe over Fabric, which lets NVMe run over transports other than PCI Express, such as InfiniBand. Up until now, PCIe SSDs would work only in the physical server in which they were placed. One server couldn’t see a PCIe card in another server, because PCIe is a point-to-point transport that was never intended for storage networking; it was designed for devices with high throughput requirements inside a single box, such as GPUs and network cards.

Plus, every PCI Express-based SSD had a custom driver that was slightly different from the rest, so you could not build a storage array with a mix of PCIe cards. You had to buy them all from one vendor.

In short, PCIe SSD was a real headache.
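
NoF changes that picture: a remote NVMe device attached over the fabric shows up to the host like a local drive. As a rough illustration, here’s what attaching an NVMe-oF target looks like on a Linux initiator using the standard nvme-cli tool; the RDMA transport, address, port, and NQN below are placeholders for this sketch, not a real configuration.

```python
# Sketch of attaching an NVMe over Fabrics target from a Linux host.
# Assumes nvme-cli is installed and an RDMA-capable fabric is in place;
# the address, port, and NQN are placeholders, not a real target.
import subprocess

TARGET_ADDR = "192.0.2.10"                      # documentation-range example address
TARGET_PORT = "4420"                            # conventional NVMe-oF service port
TARGET_NQN = "nqn.2014-08.org.example:array1"   # hypothetical subsystem name

# Ask the target's discovery controller which subsystems it exports.
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect to one of those subsystems. It then appears locally as /dev/nvmeXnY,
# as if the SSD were sitting in this server's own PCIe slots.
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```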

In addition to NoF, NVMe 1.3 added virtualization support for namespaces, so now you can build an all-flash storage array for a virtualized system, something not possible before. Up to now, you had to run a virtualized environment on an HDD-based array instead of flash. So your virtualized systems are going to get a lot faster and support a lot more throughput.
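
Namespaces are how a single NVMe device, local or fabric-attached, gets carved into separately addressable chunks that can be handed to different consumers, such as individual VMs. As a small sketch, this is one way to see which controllers and namespaces a Linux host currently exposes; the sysfs paths assume a typical Linux layout.

```python
# List the NVMe controllers and namespaces a Linux host sees via sysfs.
# Each namespace shows up as its own block device (/dev/nvme0n1, /dev/nvme0n2, ...).
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model_file = ctrl / "model"
    model = model_file.read_text().strip() if model_file.exists() else "unknown model"
    print(f"{ctrl.name}: {model}")
    # Namespace directories are named after the controller, e.g. nvme0n1, nvme0n2.
    for ns in sorted(ctrl.glob(f"{ctrl.name}n*")):
        print(f"  namespace: /dev/{ns.name}")
```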

So, you can see why all of the hardware OEMs have gotten the NVMe over Fabric religion, and why you should make sure it’s on your shopping checklist as well.