Neal Weinberg
Contributing writer, Foundry

Can anybody stop Nvidia?

Feature
26 Sep 2023 | 12 mins
Data Center | Generative AI | Servers

GPU juggernaut Nvidia has staked out a dominant position in data center AI with a portfolio that spans chips, software and services and a strategically assembled partner ecosystem.

Nvidia’s headquarters in Santa Clara, California. Credit: Nvidia

When gaming chip maker Nvidia announced a decade ago that it planned a strategic shift to data center AI, there were many questions: Could it build a full-stack, enterprise-grade offering? Was there even a market for AI?

After the company’s latest earnings report, the question is whether anybody can challenge Nvidia as the preeminent AI platform provider for both enterprise and hyperscale data centers.

Through clever acquisitions, internal hardware and software development, and strategic alliances, Nvidia positioned itself perfectly to take advantage of the generative AI frenzy created by the release of ChatGPT late last year. Neither industry-wide chip shortages nor the collapse of its proposed $40 billion purchase of chip rival Arm Ltd. had any noticeable effect on Nvidia’s phenomenal growth.

“A new computing era has begun. Companies worldwide are transitioning from general-purpose to accelerated computing and generative AI,” Nvidia founder and CEO Jensen Huang said in the company’s earnings statement. “Nvidia GPUs connected by our Mellanox networking and switch technologies and running our CUDA AI software stack make up the computing infrastructure of generative AI.”

The numbers back him up. Nvidia’s second-quarter revenue doubled year over year, from $6.7 billion to $13.5 billion. Net income jumped from $656 million to $6.1 billion, an 854% increase from a year ago and a 202% gain from the previous quarter. Gross margins hit 70%, as Nvidia was able to charge enterprises and hyperscalers a premium for in-demand GPUs.

Data center revenue came in at $10.3 billion (up 141% from the previous quarter) and now constitutes 76% of total revenue. Nvidia is on pace to surpass Cisco in total revenue by the end of the next quarter. Its stock is trading in the range of $490 a share. And, according to IDC, it holds an estimated 90% market share in enterprise GPUs, the building blocks of AI systems.

Industry analysts are bullish. Deutsche Bank’s Ross Seymore says, “We continue to believe Nvidia is uniquely suited to benefit from the growth of AI in hardware and potentially software.” Atif Malik at Citi predicts that the market for AI accelerators will “grow at a blistering pace,” with Nvidia boasting “a substantial advantage in AI performance versus AMD.”

Cowen & Co.’s Matthew Ramsay predicts that Nvidia revenue could hit $46 billion in 2024 and $65 billion in 2025. “These upward revisions are entirely concentrated in the data center segment,” says Ramsay. He adds, “While we recognize these numbers are extraordinary, we believe there is more than sufficient demand and supply to support revenue growth of this magnitude.”

Alexander Harrowell, principal analyst at Omdia, says, “There are plenty of companies that have a powerful neural-network accelerator chip, but there is only one that has Nvidia’s software ecosystem.” He adds that Nvidia’s ability to create a robust developer community around its core technology gives it a distinct advantage, not unlike what Apple has done with iPhones. “’Developers, developers, developers’ has always been a winning strategy in all things digital. It’s extremely difficult to reverse this once it’s happened,” says Harrowell.

The Nvidia ecosystem

Huang has stated that Nvidia isn’t seeking to take market share away from industry incumbents; it wants to lead the way as enterprises add AI capabilities to their existing CPU-based data centers. That strategy appears to be working because rather than alienating industry heavyweights, Nvidia has successfully threaded the needle, creating a web of partnerships and alliances.

Want to keep your data in-house and build out your own AI capabilities? Nvidia has teamed up with Dell to offer enterprises a complete, on-prem generative AI package that integrates Nvidia’s GPUs, networking, software, and NeMo large language model (LLM) framework with Dell servers, storage, and preconfigured designs for specific use cases.

If you’d rather take advantage of the scalability of the cloud and the speed with which you can get up and running, Nvidia has that covered with its DGX Cloud service, available now on Oracle Cloud Infrastructure and expected soon on Microsoft Azure and Google Cloud. DGX Cloud is a complete hardware and software package that enables enterprises to create generative AI models using Nvidia technology inside the hyperscaler’s environment.

For organizations concerned about the security risks associated with sending sensitive data into the public cloud, Nvidia has teamed up with VMware on an offering called VMware Private AI Foundation, a fully integrated, ready-to-go generative AI platform that companies can run on premises, in colocation facilities, or in private clouds.

Moving up the stack to AI-driven business applications, Nvidia is working with ServiceNow and Accenture on AI Lighthouse, which combines ServiceNow’s enterprise automation platform and engine, Nvidia AI supercomputing and software, and Accenture’s consulting and deployment services to help enterprises build custom generative AI LLMs and applications.

On the developer front, in addition to cultivating its own large developer community, Nvidia has partnered with Hugging Face, an open-source AI developer community, to give Hugging Face developers building LLMs access to DGX Cloud, enabling them to train and tune advanced AI models on Nvidia’s supercomputing infrastructure.

How about industrial applications like digital twins and robotics? Nvidia has developed its Omniverse real-time 3D graphics collaboration platform. Patrick Moorhead, CEO of Moor Insights & Strategy, says, “The availability of Nvidia’s Omniverse in the Microsoft Azure cloud is a big step forward for Nvidia and for enterprise businesses wanting to reap the benefits of digital twin technologies.”

He adds, “Very few companies can do what Nvidia is doing with Omniverse. At the heart of it, Nvidia is building from its powerful advantages in hardware to enable this incredible AI-driven software platform. That makes Omniverse a valuable tool for enterprises looking to streamline their operations and stay ahead of the curve in a rapidly evolving technological landscape.”

Smart cars? The increasingly software-driven automotive industry is also on Nvidia’s radar screen. The company is partnering with MediaTek to develop chiplet-based automotive systems-on-chips (SoCs) for OEMs.

The GPU battlefield

Nvidia has a dominant market share in GPUs, far ahead of rivals AMD and Intel, and continues to update its product portfolio with regular releases of ever more powerful chips. In the latest quarter, it announced the GH200 Grace Hopper Superchip for complex AI and high-performance computing workloads, and the L40S GPU, a universal data center processor designed to accelerate the most compute-intensive applications.

But AMD isn’t standing pat. It is challenging Nvidia with its new Instinct MI300X GPU and a companion accelerator, the MI300A, that combines GPU chiplets with Zen 4 CPU chiplets in a single package. “The generative AI, large language models have changed the landscape,” said AMD CEO Lisa Su at an event in San Francisco in June. “The need for more compute is growing exponentially, whether you’re talking about training or about inference.”

“When you compare MI300X to the competition, MI300X offers 2.4 times more memory, and 1.6 times more memory bandwidth, and with all of that additional memory capacity, we actually have an advantage for large language models because we can run larger models directly in memory,” Su said.

However, the new AMD chips won’t ship in volume until 2024. And Intel continues to lag: in March, it canceled its Rialto Bridge GPU generation and postponed the Falcon Shores GPU architecture to 2025.

“There is no meaningful competition for Nvidia’s high-performance GPUs until AMD starts shipping its new AI accelerators in high volumes in early 2024,” says Raj Joshi, senior vice president at Moody’s Investors Service.

Jim Breyer, CEO at Breyer Capital, adds, “From a three-year timeframe, Nvidia is unstoppable; it has a year-and-a-half lead in GPUs.” Breyer adds that from his perspective Nvidia’s biggest challenge isn’t coming from AMD or Intel; it’s Google.

He says Google got off to a slow start, but founders Sergey Brin and Larry Page have reportedly come out of retirement and are back at Google headquarters, working on the company’s AI project called Gemini.

Google is coming at AI more from the search perspective, seeking to defend its dominance in search against Microsoft, which has integrated ChatGPT technology into Bing and its Edge browser. (The Microsoft/OpenAI ChatGPT technology runs on Nvidia chips.)

Google also uses Nvidia GPUs but has developed its own TPUs (tensor processing units), application-specific integrated circuits (ASICs) designed for machine learning and AI. And it’s entirely possible that Google could ramp up production of its TPUs and build a full-stack generative AI offering based on its own PaLM 2 large language model.

Similarly, Amazon is developing its own AI chips. In 2015, it bought Israeli chip design startup Annapurna Labs for $350 million and has since developed two families of custom accelerators: Trainium, designed to handle the compute-intensive training of LLMs, and Inferentia, designed for the inference side of the AI equation, when an end user queries the LLM.

Amazon CEO Andy Jassy says that AWS is using Trainium and Inferentia for itself but has also made its more cost-effective accelerators available to customers. He adds that AI models trained with Trainium “are up to 140% faster” than similar GPU systems “at up to 70% lower cost.”

Amazon still buys the vast majority of its AI chips from Nvidia, so it’s not clear how much of a bite Amazon might take out of Nvidia’s chip market share. But never underestimate Google or AWS. They have the technical chops and deep pockets, and they each have their own large language models, their own marketplaces and developer communities, and, of course, data centers that can handle the demands of AI applications.

They do face a major hurdle, however, should they decide to take on Nvidia directly. Stacy Rasgon, senior analyst at Bernstein Research, points out, “Nvidia chips have a massive software ecosystem that’s been built up around them over the last 15 years that nobody else has.”

Potential pitfalls

No technology provider, however dominant it seems, is invincible, as anyone who owned a BlackBerry can attest. Several factors could lead to competitors grabbing market share from Nvidia.

Today, Nvidia is pretty much the only game in town, so it can charge significant markups on its chips; a single high-end GPU can run as much as $40,000. Once AMD and Intel get their acts together, they will undoubtedly offer lower-cost alternatives.

In addition, enterprises are always concerned about vendor lock-in, so, over time, they are likely to add a second GPU vendor to the mix. These factors could take market share away from Nvidia, but at the very least they will drive Nvidia to lower its prices, putting pressure on revenue and earnings.

Other potential pitfalls would be of Nvidia’s own making: spreading itself too thin, failing to execute, growing arrogant, losing touch with customers. None of this appears to be happening, but it wouldn’t be the first time a company suffered self-inflicted wounds.

One key strength for Nvidia is Huang’s steady leadership. A frequent speaker at industry events, Huang is a charismatic presence. At 60, he’s not near retirement age, but should he decide to step away for any reason, the company could face a leadership vacuum.

Another aspect of generative AI that’s gaining attention is power consumption. “Nvidia has the distinction of producing the world’s first chip to draw over a kilowatt of power. The AI era is proving extremely profligate in energy at exactly a time when we can least afford it,” Harrowell says.

Forrester analyst Glenn O’Donnell points out that while the technology leaders at a large enterprise might be excited by generative AI, the CFO may take a different view of spending heavily and burning lots of energy on something that is exciting but doesn’t yet demonstrate a clear ROI.

Finally, every technology advance eventually gets leapfrogged by the next big thing. Harrowell says disruption of Nvidia’s leadership could come from fundamental AI research that finds more efficient approaches than massive large language models, or from alternative processor architectures emerging from companies like Tesla, Apple, Google, IBM, or Meta.

But for the near term, Nvidia rules. O’Donnell says Nvidia has methodically executed its game plan: it built the chips, created the ecosystem, and won the mindshare battle. “There’s really no stopping this juggernaut,” he says. “They will continue to dominate.”
