OpenAI reportedly isn’t happy with Nvidia’s GPUs while Nvidia’s $100 billion investment plan in OpenAI is said to have ‘stalled’: Is the AI honeymoon over?

OpenAI reportedly isn’t happy with the performance of Nvidia’s GPUs. Meanwhile, Nvidia is having second thoughts about pumping $100 billion into OpenAI. These are the latest rumours around the two biggest players in AI. So, could their unholy alliance be faltering?

Last week, the Wall Street Journal claimed that Nvidia is rethinking its previously announced plans to invest $100 billion in OpenAI over concerns regarding its ability to compete with the likes of Google and Anthropic.

Then yesterday, Reuters posted a story detailing OpenAI’s reported dissatisfaction with Nvidia’s GPUs, specifically for the task of inferencing AI models. If the latter story looks a lot like somebody at OpenAI hitting back at the original Wall Street Journal claims, the two narratives combined feel like just the sort of tit-for-tat, off-the-record briefing that occurs when an alliance begins to falter.

For now, none of this is official. It’s all rumour. However, it is true that Nvidia’s intention to invest $100 billion in OpenAI was announced in September and has yet to be finalised.

The Wall Street Journal claims that Nvidia CEO Jensen Huang has “privately criticized what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic.”

In public, Huang has defended Nvidia’s intentions when it comes to investing in OpenAI, but he has stopped short of explicitly reconfirming the $100 billion deal. “We will invest a great deal of money, probably the largest investment we’ve ever made,” he said. But he also retorted, “no, no, nothing like that,” when asked whether that investment would top $100 billion.

As for OpenAI, Reuters says that it is “unsatisfied with some of Nvidia’s latest artificial intelligence chips, and it has sought alternatives since last year.” It’s claimed that OpenAI is shifting its emphasis away from training AI models in favour of inference, which is to say running AI models as services for customers.

It’s for that latter task, inference, that OpenAI is said to have found Nvidia’s GPUs wanting. “Seven sources said that OpenAI is not satisfied with the speed at which Nvidia’s hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software,” Reuters claims.

It’s certainly a somewhat plausible narrative. You could argue that Nvidia’s GPUs are big, complex, relatively general-purpose hardware that’s suboptimal for the specific task of inference.

The AI industry is arguably replete with frenemies… (Image credit: Nvidia)

By way of example, Microsoft recently announced its latest ASIC, or Application Specific Integrated Circuit, designed specifically for inferencing. ASICs are chips built to do a single, narrowly defined task very efficiently. And it’s probably fair to say that, in the long run, most industry observers think that AI inferencing, at the very least, will be run on ASICs rather than GPUs.

A handy parallel case study of the power of ASICs is cryptocurrency mining. That too used to be done on GPUs. But ASICs are now far, far more effective.

Anywho, it’s perhaps inevitable that the OpenAI-Nvidia love-in would falter to some degree. Both companies have a whiff of “world domination” about them and, in the end, their interests are never going to align perfectly.

As per the Wall Street Journal report, it’s very likely Nvidia will still invest billions in OpenAI. And for now, no doubt OpenAI has little choice but to keep buying billions of dollars’ worth of Nvidia GPUs. But if these stories have any truth in them, the honeymoon is probably over.

