For years, the conventional wisdom has been this: The best-performing chips for training and running AI models are Nvidia's GPUs. The main alternative is Google's custom chip, the Tensor Processing Unit (TPU), available through Google Cloud.
This dynamic is about to change.
Google is negotiating with major clients to let them run TPUs directly inside their own data centers, according to a new report from The Information. It's a strategic shift that puts Google in direct competition with Nvidia.
I discussed this chip war with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 183 of The Artificial Intelligence Show.
Historically, Google's TPUs were available only through the cloud. You couldn't buy them; you could only rent access to them.
Now, Google is pitching a program that would let huge companies, including Meta and large financial institutions, deploy these chips in their own data centers. The Information's report indicates that Meta is already in talks to spend billions on Google's chips by 2027.
The advantage is simple: On-premises TPUs offer better control over security and compliance, which matters most in industries such as finance. To sweeten the deal, Google has developed "TPU Command Center" software designed to loosen Nvidia's stranglehold on developer tools.
Google reportedly aims to capture up to 10 percent of Nvidia’s revenue through this expansion. And judging by Nvidia's reaction, the threat is being taken seriously.
Shortly after reports of Google's plan surfaced, Nvidia's stock took a hit. In response, the company's official X account posted a lengthy statement saying it was "delighted by Google's success" while simultaneously listing reasons why Nvidia's technology was superior.
“This is so not Nvidia,” says Roetzer. “It was such a bizarre tweet. They got roasted for it. It was just instantly meme-worthy.”
This defensive stance suggests Nvidia is feeling pressure from a competitor with pockets deep enough to mount a real challenge.
While the market reacted with surprise, sending Nvidia's stock down 5 percent, Roetzer says Google's ability to challenge Nvidia shouldn't shock anyone who has been paying attention.
“This has been in plain sight forever,” says Roetzer. “TPUs have been used internally since 2015. They were made available in 2018. There's a massive opportunity for them to take a piece of the market.”
Google has been running huge AI workloads on these chips for a decade. By offering them to outside customers in this way, Google is leveraging an asset it has spent billions perfecting.
While the AI community loves a winner-take-all narrative, the reality of the AI boom is that demand for compute is so high that multiple winners can emerge.
“They're both great companies,” says Roetzer. “I still feel pretty good about Nvidia's business model, and I think Google is a great company that is going to do extremely well.”
It seems clear we are in the early innings of a massive infrastructure build-out, and there is likely enough room for both Nvidia and Google to thrive.