“AI PCs” are all the rage in the computer industry right now, at least if you listen to anyone trying to get you to buy a new laptop. Neural processing units or NPUs are the new hotness, powering local AI capabilities…though whether that’s actually worth anything is still in question. But Nvidia wants people to know that if you want real AI power, you need a discrete GPU.
That’s the basis of a presentation leaked by Chinese site BenchLife.info and spotted by VideoCardz.com. In the slides posted to the site (and marked “Nvidia Confidential — Do Not Distribute”, so consider all of this unconfirmed), Nvidia lays out the difference between computers that feature “Basic AI” and “Premium AI.” That difference is, in short, a dedicated and discrete graphics card, the kind that Nvidia sells.
Of course, the actual presentation was a little more substantive. The slides point out that the NPUs integrated into the latest Intel and AMD processors are capable of 10 to 45 tera operations per second (TOPS), while a “Premium AI PC” with an RTX card can handle 100 to 1,300 TOPS. Nvidia also notes that while NPUs are still an extremely recent development, with fewer than a million units in consumers’ hands as of the end of 2023, Nvidia RTX-powered computers had reached over 100 million units by that point.
Now it goes without saying that this definition of “Premium AI PC” is self-serving for Nvidia, in the same way Microsoft’s definition of an AI PC (complete with a dedicated Copilot button on the keyboard) was. And even Nvidia isn’t saying a discrete graphics card is the be-all, end-all of AI: the same slide says that data centers powered by “Cloud GPUs” are needed for “Heavy AI,” presumably handling most of the generative tools that most people consider AI.
But it’s worth pointing out that Nvidia is also selling those cloud GPUs, and a hell of a lot of them, too. The company is projected to earn $24 billion in revenue in Q1 2024, a 600 percent increase over the same quarter in the previous year.