Nvidia’s competitors are targeting the company’s software dominance

Nvidia’s rivals and biggest customers are rallying behind an OpenAI-led initiative to develop software that would make it easier for artificial intelligence developers to move away from its chips.

Silicon Valley-based Nvidia has become the world’s most valuable chipmaker because it has a near monopoly on the chips needed to develop large AI systems. But delivery bottlenecks and high prices are pushing customers to look for alternatives.

However, producing new AI chips only solves part of the problem. While Nvidia is best known for its powerful processors, industry insiders say its “secret sauce” is its Cuda software platform, which enables chips originally designed for graphics to accelerate AI applications.

At a time when Nvidia is investing heavily in expanding its software platform, rivals like Intel, AMD and Qualcomm are targeting Cuda in hopes of poaching customers – with strong support from some of Silicon Valley’s biggest companies.

Engineers from Meta, Microsoft and Google are helping to develop Triton, software for efficiently running code on a variety of AI chips, which OpenAI released in 2021.

Even as they continue to spend billions of dollars on Nvidia’s latest products, major tech companies are hoping Triton will help break the stranglehold Nvidia has on AI hardware.

“Essentially it breaks the Cuda lock-in,” said Greg Lavender, Intel’s chief technology officer.

Nvidia dominates the market for building and deploying large language models, including the system behind OpenAI’s ChatGPT. This has pushed the company’s value to over $2 trillion, leaving rivals Intel and AMD scrambling to catch up. Analysts expect Nvidia to announce this week that last quarter’s revenue more than tripled from a year ago and profit rose more than sixfold.

But Nvidia’s hardware is only so sought after because the company has spent nearly two decades developing the associated software. That has created a huge competitive advantage which rivals have found difficult to overcome.

“Nvidia doesn’t [just] build the chip: we build the entire supercomputer, from the chip to the system to the interconnects… but most importantly, the software,” said CEO Jensen Huang at the GPU Technology Conference in March. He has referred to Nvidia’s software as the “operating system” of AI.

Nvidia was founded more than 30 years ago to serve video gamers. Its move into AI was eased by Cuda, software the company developed in 2006 to allow general-purpose applications to run on its graphics processors.

Since then, Nvidia has invested billions of dollars developing hundreds of software tools and services to make running AI applications on its GPUs faster and easier. Nvidia executives say the company is now hiring twice as many software engineers as hardware engineers.

“I think people underestimate what Nvidia has actually built,” said David Katz, partner at Radical Ventures, an investor specializing in AI.

“They have built a software ecosystem around their products that is efficient, easy to use and actually works – making very complex things simple,” he added. “It’s something that has evolved over a very long period of time with a huge user community.”

Still, the high price of Nvidia’s products and the long line to buy its most advanced devices, such as the H100 and the upcoming “superchip” GB200, have led some of its biggest customers – including Microsoft, Amazon and Meta – to look for alternatives or develop their own.

However, since most AI systems and applications already run on Nvidia’s Cuda software, it is time-consuming and risky for developers to rewrite them for other processors such as AMD’s MI300, Intel’s Gaudi 3 or Amazon’s Trainium.

“The thing is: if you want to compete with Nvidia in this space, you not only have to build competitive hardware, but you also have to make it easy to use,” said Gennady Pekhimenko, CEO of CentML, a startup that develops software to optimize AI workloads; he is also an associate professor of computer science at the University of Toronto. “Nvidia’s chips are really good, but in my opinion their biggest advantage is on the software side.”

Competitors like Google’s TPU AI chips may offer comparable performance in benchmark tests, but “ease of use and software support make a big difference” in Nvidia’s favor, Pekhimenko said.

Nvidia executives argue that the company’s software work allows it to deploy a new AI model on its latest chips in “seconds” and delivers continuous efficiency improvements. But these advantages come with a catch.

“We’re seeing a lot of Cuda lock-in in the [AI] ecosystem, which makes it very difficult to use non-Nvidia hardware,” said Meryem Arik, co-founder of TitanML, a London-based AI startup. TitanML initially used Cuda, but last year’s GPU shortage prompted the company to rewrite its products using Triton. Arik said this helped TitanML attract new customers who wanted to avoid what she called the “Cuda tax.”

Triton, whose co-creator Philippe Tillet was hired by OpenAI in 2019, is open source, meaning anyone can view, adapt or improve its code. Proponents argue that this makes Triton more attractive to developers than Cuda, which is owned by Nvidia. Originally, Triton worked only with Nvidia’s GPUs, but it now supports AMD’s MI300, and support for Intel’s Gaudi and other accelerator chips is expected to follow soon.

Meta, for example, has made the Triton software the heart of its self-developed AI chip, MTIA. When Meta released the second generation of MTIA last month, its engineers said Triton was “highly efficient” and hardware-agnostic enough to work with a range of chip architectures.

According to commit logs on GitHub and discussions about the toolkit, developers from OpenAI rivals such as Anthropic – and even from Nvidia itself – have also been working on improving Triton.

Triton isn’t the only attempt to challenge Nvidia’s software lead. Intel, Google, Arm and Qualcomm are among the members of the UXL Foundation, an industry alliance developing a Cuda alternative based on Intel’s open-source oneAPI platform.

Chris Lattner, a well-known former senior engineer at Apple, Tesla and Google, has launched Mojo, a programming language for AI developers whose pitch is: “No Cuda required.” Only a small minority of software developers worldwide know how to program with Cuda, and it is difficult to learn, he argues. With his startup Modular, Lattner hopes Mojo will make it “dramatically easier” to develop AI for “developers of all kinds – not just the elite experts at the largest AI companies.”

“Today’s AI software is built on programming languages from more than 15 years ago; it’s like using a BlackBerry today,” he said.

Even if Triton or Mojo prove competitive, it will take years for Nvidia’s rivals to catch up to Cuda’s lead. Analysts at Citi recently estimated that Nvidia’s share of the generative AI chip market will decline from about 81 percent next year to about 63 percent in 2030, suggesting that Nvidia will remain dominant for many years to come.

“Building a competitive chip against Nvidia is a difficult but easier problem than building the entire software stack and getting people to use it,” Pekhimenko said.

Intel’s Lavender remains optimistic. “The software ecosystem will continue to evolve,” he said. “I think the playing field will be leveled.”

Additional reporting by Camilla Hodgson in London and Michael Acton in San Francisco
