Long-Term Investors Should Own Alphabet, Nvidia and Broadcom

Alphabet supplying its TPUs to Meta positions it as a strong inference competitor to Nvidia for specific workloads, not as a replacement for GPUs.

By Fountainhead Investing

Published November 25, 2025

Ironwood scores with Meta

Alphabet (GOOG) had some exciting news for the AI industry this morning with a potential order from Meta Platforms (META) for its Ironwood TPUs (Tensor Processing Units), an ASIC designed by Broadcom (AVGO). The news drew a lot of attention and an intraday 7% drop in Nvidia's (NVDA) stock price, with analysts questioning Nvidia's future and its business model. However, much of the market's reaction is noise, not a trend changer, as I wrote earlier. For starters, competitive chips have been expected for a while, and Ironwood's chip designer Broadcom has always positioned itself as a major AI chip supplier, with previous guidance of $70–80B in revenue by 2030. This order, or planned order, reflects what Broadcom has been forecasting for some time and shouldn't have taken the markets by such surprise; in my opinion, it reflects short-term sentiment, not long-term fundamentals. I believe it is a) much ado about nothing and b) it doesn't change much for Nvidia in the long run. I own all three companies, so I'm excited to see Alphabet and Broadcom succeed as well!

For months, analysts had speculated about which companies might meaningfully challenge Nvidia's dominance in artificial intelligence compute, with the usual suspects (AMD, Broadcom, custom ASIC startups, or sovereign-backed chip efforts) making the rounds. But the most credible challenger may have been quietly building momentum in plain sight: Google's TPU, designed and manufactured by Broadcom.

The conversation escalated this morning after The Information reported that Meta is in advanced discussions to order large volumes of Google's newest TPU generation, Ironwood (also referred to as TPU 7x). That news alone sparked a rally in Alphabet and Broadcom, while Nvidia fell as much as 7% intraday before partially recovering to close 2.5% lower. Analysts immediately questioned whether Nvidia's business model, built on high-cost, general-purpose GPUs, might finally be threatened by cheaper, purpose-built inference silicon.

But that interpretation is too simplistic; it misreads the long-term structure of the AI compute market and the role TPUs are designed to play. Rather than signaling Nvidia's decline, I'm confident this is the necessary evolution of a maturing, trillion-dollar industry: one where multiple chip architectures coexist, specialize, and expand the total addressable market.

Broadcom’s rise as a foundational AI player

Earlier this year, Broadcom CEO Hock Tan projected that the company could reach $70–80 billion in total revenue by 2030, driven significantly by AI acceleration. I estimated Broadcom's AI revenue at around $52 billion by fiscal 2027, up from less than $5B just two years ago. A substantial part of that growth is tied to Google's TPU roadmap. For years, Alphabet has relied, and continues to rely, on Nvidia GPUs for training its largest foundation models, while simultaneously developing TPUs to support internal AI workloads more efficiently. But the Ironwood generation could represent a strategic turning point (especially since it was used to hone Gemini 3): a chip explicitly optimized for large-scale generative AI inference, arguably the fastest-growing part of the AI compute market.

That Meta, a company with its own custom MTIA accelerator program and massive Nvidia deployments, is exploring a major TPU order demonstrates confidence not just in the chip, but in Broadcom's ability to design, and Google's to deliver and scale, high-performance AI silicon on hyperscaler timelines.

What spooked investors?

The prospect of Nvidia losing future market share, and its profit margins should this devolve into a price war among hyperscalers, led to a reflexive selloff. Nvidia's stock has risen dramatically over the last five years on its market share and massive profit margins, and the mere hint of competition spooked skittish investors.

But the reaction wasn't about fundamentals; it was about expectations. Investors should never have assumed Nvidia would sustain its astonishing ~80% AI accelerator market share indefinitely. As AI grows from early-scale adoption to a global compute utility, Nvidia's market share has to come down: cloud providers prefer multiple suppliers, and enterprises demand price leverage and workload-specific optimization.

So yes, TPUs will gain share — but that doesn’t imply Nvidia loses revenue. The market itself is expanding faster than share shifts can offset.

TPUs vs GPUs: Not substitutes, but complements

TPUs and Nvidia GPUs are architecturally and strategically different.

TPUs are:

  • ASICs built specifically for matrix math used in AI

  • Highly efficient and cost-optimized for inference at scale

  • Best suited for tightly integrated cloud deployments

  • Dependent on host cloud infrastructure and software tooling

GPUs are:

  • General-purpose accelerators

  • Optimized for training and inference, plus simulation, rendering, HPC

  • Supported across every major cloud, OEM, and enterprise data center

  • Powered by CUDA — the most valuable software ecosystem in AI

Google’s Ironwood TPUs can deliver exceptional throughput per dollar for uniform, high-volume workloads like recommendation models, ranking, or LLM inference where pipelines are well understood and stable. Nvidia’s Blackwell generation — including the B100, B200 and GB200 Superchip — is designed for diverse and rapidly evolving AI workloads, from multimodal model training to inference to scientific computing.
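To ground the comparison, here is a minimal, illustrative CUDA sketch (my own example, not vendor code) of the dense matrix multiply that sits at the heart of both architectures: a TPU hard-wires this operation into fixed-function systolic arrays, while a GPU expresses it as one of many programmable kernels.

    // Naive dense matrix multiply, C = A * B, for N x N matrices.
    // This is the core operation a TPU bakes into fixed-function
    // hardware; on an Nvidia GPU it is just one programmable kernel
    // among many. Illustrative only; production code would use cuBLAS.
    #include <cuda_runtime.h>

    __global__ void matmul(const float* A, const float* B, float* C, int N) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < N && col < N) {
            float acc = 0.0f;
            for (int k = 0; k < N; ++k)
                acc += A[row * N + k] * B[k * N + col];
            C[row * N + col] = acc;
        }
    }

    // Launch for device-resident buffers dA, dB, dC:
    //   dim3 block(16, 16), grid((N + 15) / 16, (N + 15) / 16);
    //   matmul<<<grid, block>>>(dA, dB, dC, N);

The same source can be pointed at rendering, simulation, or HPC work, which is exactly the versatility the GPU list above describes.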

Both have important roles in AI: TPUs for raw performance per dollar on stable workloads, GPUs for versatility and flexibility. And flexibility tends to win in environments where models, applications, frameworks, and neural architectures are still changing rapidly.

Nvidia's software moat

Analysts touted Ironwood's chip specs (teraflops, transistor counts, thermal envelopes), which are impressive, but AI infrastructure decisions are increasingly dictated by software. Nvidia's CUDA ecosystem is not just a convenient programming model; it is the economic foundation of modern AI deployment, with 6-7 million developers worldwide.

CUDA took more than a decade to develop, scale, and integrate across:

  • compilers

  • libraries

  • inference runtimes

  • developer tooling

  • enterprise support

  • research workflows

  • cloud orchestration

Switching away from CUDA requires rewriting, optimizing, and revalidating years of code. That might be feasible, if time consuming and costly, for Google and potentially Meta, but it is nearly impossible for the vast majority of AI builders. The sketch below shows how deep that dependence runs, even for a single operation.
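As an illustration (a minimal sketch of a common pattern, not any particular company's code), even one matrix multiply in a typical CUDA codebase is tied to Nvidia-specific libraries such as cuBLAS. Every call site like this would need a TPU-equivalent rewrite, re-optimization, and revalidation.

    // Single-precision GEMM (C = alpha * A * B + beta * C) via cuBLAS.
    // Porting this off CUDA means replacing the handle, the device
    // memory model, and the library call itself, not just one line.
    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    void gemm_on_gpu(const float* dA, const float* dB, float* dC, int n) {
        cublasHandle_t handle;
        cublasCreate(&handle);

        const float alpha = 1.0f, beta = 0.0f;
        // cuBLAS assumes column-major storage, another CUDA-era
        // convention baked into downstream code.
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n,
                    &alpha, dA, n,
                    dB, n,
                    &beta, dC, n);

        cublasDestroy(handle);
    }

Multiply that by years of custom kernels, inference runtimes, and orchestration glue, and the switching cost becomes clear.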

This is the primary reason Nvidia maintains durable pricing power and continues to capture a disproportionate share of AI-related capex.

Does Google's TPU success hurt Nvidia?

In the near term — no. Meta already has massive Nvidia GPU orders for 2025 and 2026. TPU adoption would begin meaningfully in 2027 and later. And even then, TPUs would represent an incremental — not replacement — compute layer.

Most hyperscalers’ long-term AI strategy clearly involves:

  • multiple vendors

  • workload-specific chips

  • negotiating capex efficiency

  • supply-chain resilience

This mirrors Amazon, Microsoft, Tesla, and most sovereign AI efforts. Nvidia's share will eventually come down from roughly 80% to 50-60% of hyperscaler spend, but absolute dollars will still rise.

Google itself implicitly acknowledges this: its TPU revenue goal is roughly 10% of Nvidia’s scale, not parity.

What should investors do?

The market’s reaction reflects fatigue — not fundamentals. Nvidia has been the best-performing large-cap equity of the decade. Any sign of competitive pressure will attract short-term trading flows and profit-taking.

But structurally, nothing material has changed:

  • AI demand continues to accelerate

  • inference and training workloads are both expanding

  • Blackwell remains on track and highly competitive

  • CUDA retains overwhelming adoption

  • AI compute TAM keeps compounding

Competition validates — not undermines — Nvidia’s leadership.

Buy, hold or sell?

Today's TPU headlines don’t represent a turning point away from Nvidia, but rather a turning point for the AI industry. The era of single-vendor dominance is naturally giving way to a diversified compute ecosystem, where different architectures serve different purposes. That’s healthy, inevitable, and necessary for AI to achieve global scale.

More importantly, it reinforces confidence in the entire sector — Alphabet, Nvidia, and Broadcom all stand to benefit from rising AI capex, broader deployment models, and workload specialization.

And for long-term investors, the strategy remains unchanged: own all three, like I do.