
Nvidia’s new Volta-based DGX-1 supercomputer puts 400 servers in a box

News | May 10, 2017

The Nvidia supercomputer will ship for $149,000 in the third quarter

You won’t need to buy a rack of 400 servers if you have one high-powered Nvidia DGX-1 supercomputer with Volta GPUs sitting on your desktop.

The DGX-1 supercomputer — which looks like a regular rack server — gets most of its computing power from eight Tesla V100 GPUs.

The GPU, the first one based on the brand-new Volta architecture, was introduced at the company’s GPU Technology Conference in San Jose, California, on Wednesday.

“It comes out of the box, plug it in and go to work,” said Nvidia’s CEO Jen-Hsun Huang during a keynote speech.

But the DGX-1 with Tesla V100 is expensive. At US$149,000, it’s worth some people’s life savings. Still, Huang encouraged people to order it, saying the box will ship in the third quarter.

The new supercomputer has 40,960 CUDA cores, which Nvidia says equals the computing power of 800 CPUs. It replaces the previous DGX-1, based on the older Pascal architecture, which has the power of 250 two-socket servers, according to Nvidia.

Nvidia says the system delivers about 960 teraflops of half-precision (16-bit floating point) performance; the single-precision and double-precision figures will be lower, and Nvidia didn’t provide them. Half-precision performance is considered valuable for machine-learning tasks, which can tolerate the reduced numerical accuracy in exchange for higher throughput.
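To put the half-precision number in context, here is a minimal CUDA sketch of what FP16 arithmetic looks like to a programmer, using the intrinsics in cuda_fp16.h. The kernel and its names are illustrative, not part of Nvidia’s announcement, and native FP16 math requires a GPU of compute capability 5.3 or later.

    // Illustrative sketch: elementwise multiply-add done entirely in half precision (FP16).
    #include <cuda_fp16.h>

    __global__ void fp16_axpy(const __half *x, const __half *y, __half *out,
                              __half alpha, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            // __hfma computes alpha * x[i] + y[i] in FP16 -- the data type
            // behind the 960-teraflop figure quoted for the system.
            out[i] = __hfma(alpha, x[i], y[i]);
        }
    }

Because each FP16 value is half the size of a standard float, the hardware can move and process roughly twice as many of them per cycle, which is why the half-precision figure is the one Nvidia leads with.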

Accompanying the GPUs are two 20-core Intel Xeon E5-2698 v4s running at clock speeds of 2.2GHz. The system has four 1.92TB SSDs and runs on Ubuntu Linux.

The system draws 3,200 watts of power, so don’t keep it running all day long, or it’ll run up your electricity bill.

Gamers shouldn’t get excited about the machine. The DGX-1 with Tesla V100 is too expensive to make sense as a gaming rig; it is designed for machine learning instead.

GPUs already power machine-learning tasks in data centers, and the Nvidia supercomputer is an example of how GPUs are making applications like image recognition and natural language processing a reality.

Huang said CPUs do not provide enough power for computing, especially for artificial intelligence, which is where GPUs fit in.

The Tesla V100 in the DGX-1 is five times faster than its Pascal-based predecessor, Huang said. It features new technologies like NVLink 2.0, an interconnect with bandwidth of up to 300GBps (bytes per second). The GPU has more than 21 billion transistors and 5,120 CUDA cores, and it offers 900GBps of HBM2 memory bandwidth.

Nvidia has also included new cube-like Tensor Cores, which work with the regular processing cores to improve deep learning. Nvidia structured these cores to speed up matrix multiplications, which are at the heart of deep-learning systems; grouping the low-level floating-point calculations into larger matrix operations is what should speed up deep learning.
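For developers, Nvidia’s CUDA toolkit exposes the Tensor Cores through a warp-level matrix-multiply API (nvcuda::wmma, added in CUDA 9). The sketch below is a minimal, illustrative use of that API: one warp multiplies a pair of 16x16 FP16 tiles and accumulates the result in FP32, the mixed-precision pattern described above. It assumes CUDA 9 or later and a Volta-class GPU (compute capability 7.0).

    // Illustrative sketch: one 16x16x16 matrix multiply-accumulate on Tensor Cores.
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    __global__ void tensor_core_tile(const half *a, const half *b, float *c) {
        // Fragments hold one warp's share of the 16x16 input and output tiles.
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::fill_fragment(c_frag, 0.0f);       // start the FP32 accumulator at zero
        wmma::load_matrix_sync(a_frag, a, 16);   // load the FP16 input tiles
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // D = A*B + C on the Tensor Cores
        wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
    }

In practice, most deep-learning users would get the speedup without writing kernels like this themselves, since frameworks call into Nvidia libraries such as cuBLAS and cuDNN that use this hardware path internally.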

Huang boasted the GPU offers 120 teraflops of deep-learning performance, though that’ll be hard to verify. Standard benchmarking tools don’t exist for machine- or deep-learning applications, though development is underway at companies like Google.

The supercomputer works with many high-performance computing and deep-learning tools, including Nvidia’s CUDA platform, TensorFlow, and Caffe2.

The graphics company also introduced the DGX Station, a smaller version of the new DGX-1. It looks more like a workstation and has four Tesla V100 GPUs, half as many as the DGX-1. It is priced at $69,000 and will also ship in the third quarter.

Nvidia didn’t immediately say if the products will be shipped worldwide.