Alphabet supplying its TPU to Meta makes it a strong inference competitor to Nvidia for specific workloads, not a replacement for GPUs.

A few months back, I wrote that Broadcom (AVGO) was a very viable second major player in artificial intelligence with its TPUs (Tensor Processing Units), or ASICs (Application Specific Integrated Circuits). Broadcom's Hock Tan had guided to $70-80Bn in revenues by 2030, and in that article I estimated $52Bn worth of AI revenue for Broadcom by the fiscal year ending Sep 2027. Given the strength in AI demand and hyperscaler Capex, I suspect this might be a tad conservative.
The Google Ironwood TPU, which is all over the news today because of a possible big order from Meta (META), is designed by Broadcom. Broadcom, like Alphabet, has shot up in the last week at the cost of Nvidia (NVDA) (down 4%), with analysts questioning whether Nvidia would lose out to Alphabet's lower-cost ASICs. Some even questioned whether Nvidia's business model was in danger.
I don't agree with that assessment at all; this development is good news for the entire sector. I also think it's much ado about nothing. TPUs are supposed to have a role in specific inference tasks and will continue to proliferate, and it was never correct to assume that Nvidia's GPUs would keep an 80% market share into the next decade.
In that regard, Ironwood is an absolute monster of a product. It can reach extremely high FLOPs and scale to thousands of chips with a fast 3D torus interconnect. Ironwood, the 7th-generation TPU, is explicitly designed for the "age of generative AI inference," with big efficiency gains over prior TPU generations. It is a single-purpose design maximized for huge, well-tuned jobs within Google Cloud's operations. As Alphabet stated, its goal is to capture 10% of Nvidia's revenue, which is hardly the same as replacing Nvidia.
For uniform training jobs inside a single vendor's stack, such as running Gemini-scale models, Ironwood will save you a lot of money. But for mixed workloads across thousands of customers, frameworks, and model types, Blackwell's flexibility and ecosystem are a huge and, I would say, irreplaceable advantage.
GPUs will likely still command 50-60% or more of the market by 2030, because they are more versatile and general purpose across all AI-related workloads. Besides, advantages such as TCO (Total Cost of Ownership), Nvidia's CUDA platform, switching costs, and the sheer breadth and versatility of its operations ensure that Nvidia's GPUs would be nearly impossible to replicate or replace; that kind of product development has taken close to two decades.
Nvidia's support of neoclouds such as Nebius (NBIS) and CoreWeave (CRWV) widened the market, spread AI to smaller players, and ate into Google's, Microsoft's (MSFT), and Amazon's cloud operations. The counterattack into Nvidia's chip domain is a natural response. But the market's knee-jerk reaction, beating Nvidia's price down 7% in the morning, suggests a lot of skittishness. It is not worrisome in the long term, but it does indicate that there are plenty of skeptics willing to sell at anything remotely resembling a challenge to Nvidia's hegemony. That is a result of exhaustion and weaker hands in the market, and shouldn't be confused with the fundamentals of either company.
I remain long on all three: Alphabet, Nvidia, and Broadcom.

Nebius' masterly execution as an integrated neocloud player allows it to borrow at very low interest rates with very little shareholder dilution.

Nebius is executing brilliantly as an integrated neocloud player, with tremendous reach, value addition, and strong pricing power.