Beth Kindig, lead analyst of the I/O Fund, released a report on Nvidia (NVDA, $185) this morning, claiming that it could reach a market cap of $20Trn by 2030.
Kindig has been an Nvidia bull for a long time and a very, very successful one at that - probably the most successful - and she has never wavered in her conviction. One of the key strengths of the I/O Fund is its incredible due diligence: the team focuses on fewer than 30 companies and does amazingly detailed analysis.
Kindig’s valuation is based on a $900Bn sales run rate in 2030, valued at 22x sales to reach ~$20Trn. The $900Bn run rate implies a revenue CAGR (Compound Annual Growth Rate) of 36% from 2025 to 2030. Whew!!! The key aspects of the thesis will probably be familiar to a lot of us in our WhatsApp group and to those following Nvidia. I own shares and plan to hold my investments for a minimum 5-year period, and may add more on declines.
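As a quick sanity check on the arithmetic (a minimal sketch: the 36% CAGR and 22x multiple come from the report; the implied 2025 base revenue is derived here, not sourced):

```python
# Back out the 2025 revenue implied by a $900Bn 2030 run rate at a 36% CAGR,
# and the market cap implied by a 22x price-to-sales multiple.
run_rate_2030 = 900  # $Bn, per the I/O Fund report
cagr = 0.36          # 2025-2030, per the report
years = 5

implied_2025_revenue = run_rate_2030 / (1 + cagr) ** years
implied_market_cap = run_rate_2030 * 22  # 22x sales multiple

print(f"Implied 2025 revenue: ~${implied_2025_revenue:.0f}Bn")     # ~$193Bn
print(f"Implied market cap: ${implied_market_cap / 1000:.1f}Trn")  # $19.8Trn
```

The ~$193Bn implied 2025 base is in the ballpark of Nvidia's current revenue trajectory, which is why the thesis hinges almost entirely on whether the 36% growth rate holds.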
These are my key takeaways from the I/O fund’s report:
The massive demand for compute and specifically for Nvidia’s GPUs in the next 5 years is the key reason why Kindig feels Nvidia can reach that valuation.
- Capex by the hyperscalers - $400Bn in 2025, expected to rise to at least $500Bn per year through 2030.
- Third-party forecasts that align with (or contrast against) the thesis:
- Grand View Research AI Accelerator Market Growth: Grand View Research projects a ~31.5% CAGR for AI accelerators through 2033, supported by the ever-increasing deployment of AI across industries.
- McKinsey’s AI Infrastructure Forecast: McKinsey & Co. estimates about $7 trillion in cumulative AI infrastructure spending by 2030, with roughly $5.2 trillion of that dedicated to building out AI data centers. This implies annual AI CapEx on the order of $1.5 trillion by 2030, a massive addressable market for Nvidia.
- AMD’s Revised TAM Projections: Advanced Micro Devices (AMD) – Nvidia’s chief competitor in accelerators – has also drastically raised its expectations for the AI hardware market. CEO Dr. Lisa Su recently doubled AMD’s 2030 AI TAM estimate to ~$1 trillion, up from a prior $500B by 2028. This new forecast implies roughly 35% annual growth for the industry over the next 5+ years. Notably, 35%+ CAGR is essentially the same growth rate required of Nvidia’s data center business on the path to a $20T valuation, lending credibility to the idea that such growth is achievable. AMD’s update echoes other experts (e.g. McKinsey) in signaling that AI demand is scaling faster than previously thought.
- UBS Global Spending Forecasts: Investment bank UBS has recently upgraded its estimates for AI-related capital expenditures, reflecting the rapid increases in spending. UBS now forecasts $423B in AI CapEx for 2025 (up from $375B prior) and about $1.3 trillion in annual AI spend by 2030. This 2030 figure equates to a ~25% CAGR in coming years – lower than the ~36% CAGR in Kindig’s Nvidia thesis, but analysts have been consistently revising these figures upward. There is reason to believe forecasts could rise closer to Nvidia’s growth trajectory as new orders and investments are announced. Even at $1.3T, AI CapEx would be only ~1% of global GDP in 2030, whereas past tech booms (railroads, telecom, etc.) reached 1.5–4.5% of GDP. This comparison suggests current AI investment projections might still be conservative, leaving room for further upside in the total market.
- Last but not least, Nvidia’s own order book versus current forecasts: The growth the thesis requires far outpaces most analysts’ current models. Nvidia’s own visibility into demand is very high – CEO Jensen Huang disclosed the company has ~$500 billion in cumulative orders lined up for its next-gen data center GPUs (Blackwell and Rubin) through 2026. This suggests Nvidia’s data center revenue could approach $320B in FY2027 (vs. ~$270B in prevailing estimates). Yet Wall Street consensus remains conservative, projecting only ~$208B in revenue for FY2026 and ~$290B for FY2027 (combined < $500B). Many analysts (e.g. at Cantor, UBS, Melius, New Street, Wolfe Research) argue these estimates are too low, noting Nvidia may “nearly double” data center revenue in 2026 relative to current forecasts.
Under Kindig’s $20T scenario, Nvidia would capture ~60% of this AI spend – a high share, but not far above Nvidia’s current ~50% share of AI hardware spending. In other words, if McKinsey’s demand projection is correct, Nvidia “only” needs to maintain its dominance to approach the thesis numbers.
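Two derived checks on the forecast figures above (a sketch: the inputs are the UBS and McKinsey numbers quoted in the text; the CAGR and share are computed, not sourced):

```python
# Derive the growth rate implied by UBS's AI capex forecasts, and Nvidia's
# implied share of 2030 AI spend under Kindig's run-rate assumption.
ubs_capex_2025 = 423     # $Bn, UBS 2025 AI capex forecast
ubs_capex_2030 = 1300    # $Bn, UBS 2030 AI capex forecast
ubs_cagr = (ubs_capex_2030 / ubs_capex_2025) ** (1 / 5) - 1
print(f"UBS implied CAGR: {ubs_cagr:.1%}")  # ~25.2%, vs. ~36% in the thesis

kindig_2030_run_rate = 900   # $Bn, Kindig's Nvidia revenue assumption
mckinsey_annual_2030 = 1500  # $Bn, annual AI capex implied by McKinsey
share = kindig_2030_run_rate / mckinsey_annual_2030
print(f"Implied Nvidia share of 2030 AI capex: {share:.0%}")  # 60%
```

The gap between UBS's ~25% and the thesis's ~36% is the crux: the bull case rests on these forecasts continuing to be revised upward.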
Aggressive product roadmap keeping competition at bay:
Rapid GPU Release Cadence (“AI Factory” Roadmap): Nvidia’s offensive product strategy is central to the $20T thesis. The company is moving to a fast 12–18 month cadence for new GPU generations – a dramatic acceleration from prior cycles and much faster than custom-silicon competitors (which iterate on ~3–5 year cycles). This aggressive roadmap aims to turn what was a cyclical product cycle into consistent, compounding growth, ensuring Nvidia’s performance leadership and preventing custom ASICs from catching up.
Key upcoming platforms include:
- Blackwell (2025): The next-gen GPU architecture following 2022’s “Hopper,” yielding ~30×–40× faster AI inference and ~2.5× faster training compared to Hopper. Energy efficiency improves by ~25×.
- Vera Rubin (2026–2027): The following architecture (named for astronomer Vera Rubin) will scale Nvidia’s AI systems further. A standard Rubin system doubles the GPU count to 144 (from 72 in Blackwell), delivering about 3.3× higher performance. An even larger Rubin Ultra platform is planned, incorporating 576 GPUs in a single system (4× the base Rubin system).
- Feynman (2028): This generation (expected around 2028) is aimed at enabling “giga-scale” AI factories. Feynman systems would power gigawatt-level AI data centers, roughly 8× the scale of today’s largest AI superclusters. Feynman-class deployments indicate a huge leap in infrastructure and energy scale for Nvidia-powered AI clouds.
TO ME CUDA IS THE IMPENETRABLE MOAT
CUDA Software Moat & Full-Stack AI Systems: Beyond hardware, Nvidia’s software ecosystem is a critical moat. The proprietary CUDA platform (and related libraries) forms an “impenetrable” developer ecosystem that locks in AI researchers and enterprise customers. Custom AI chips can excel for specific workloads, but they “cannot compete with GPUs and Nvidia’s CUDA software platform on general workloads,” which can run every model and framework with extensive optimization. Nvidia has also evolved into a full-stack AI systems provider – offering not just chips but complete systems (DGX servers), networking (InfiniBand/NVLink), and software frameworks. This end-to-end integration makes Nvidia’s platform highly attractive for scaling AI, and bolsters its infrastructure advantage.
Unmatched Infrastructure Scale: Nvidia’s rapid product cadence and full-stack approach are enabling massive scale-out of AI infrastructure. The size of GPU clusters is growing exponentially: the first 10,000-GPU clusters appeared in 2023; by late 2024, clusters hit 100,000 GPUs, and plans are in place for “hundreds of thousands” of GPUs in single data centers by 2026–27. This trajectory suggests the industry could see 1 million-GPU clusters in the coming years, underpinning the millions of Nvidia GPU shipments required to meet surging cloud demand. Such scaling underscores Nvidia’s unique ability to deliver whole AI factories at extreme scale, reinforcing the $20T thesis by showcasing how Nvidia’s tech roadmap directly converts into larger and larger revenue streams.
Customers and Contracts
- OpenAI’s Mega-Deals (Stargate Project & Azure Commitment): Nvidia is reportedly engaged in a “Stargate” project with OpenAI worth about $500 billion.
- Nvidia and OpenAI also signed a direct partnership to deploy up to 10 GW of Nvidia GPU capacity in OpenAI’s data centers. Under this deal, Nvidia itself will invest up to $100B to supply and stand up that infrastructure, with funds released as each gigawatt of capacity comes online.
- Hyperscalers: Multi-billion dollar orders have been placed by other cloud giants and AI firms, underscoring insatiable demand for Nvidia hardware. For example, firms like Meta, Google, Amazon, and emerging AI startups are all racing to acquire high-end GPUs to build out AI capabilities, with deals continuing to pop up across the industry.
- Microsoft and Cloud Partners: It’s not just OpenAI – hyperscale cloud providers are also committing enormous capital to Nvidia-powered AI infrastructure:
- Microsoft has reportedly contracted approximately 200,000 of Nvidia’s GB300 GPUs (a configuration of the Blackwell generation) from the AI cloud startups CoreWeave/Nscale, in a deal valued around $14 billion.
- Ecosystem Partnerships: Nvidia’s strategy also involves partnership deals beyond just selling chips – including co-investments and long-term supply arrangements (as seen with OpenAI). Such strategic partnerships not only bring direct revenue but also strengthen Nvidia’s ecosystem. By being deeply involved in clients’ AI build-outs (e.g. investing in OpenAI’s infrastructure), Nvidia ensures it remains the platform of choice for large-scale AI deployments. These alliances with the likes of OpenAI and Microsoft solidify Nvidia’s role at the center of the AI revolution, making the $20T projection more tangible through concrete, contracted demand.
Yes, all these look great on paper, but it should be clear that Nvidia’s capital is also at stake in several of these strategic partnerships, thereby putting it at greater risk.
A new world
- Unprecedented AI Infrastructure Build-Out: The drive toward AI at scale is creating what may be the largest technology infrastructure build-out in history, and forecasts for the coming years have been repeatedly revised upward. This arms race in spending is a tailwind for Nvidia, and it points to huge industry investment in data centers, networking, and power, reminiscent of past major tech booms.
- Energy Consumption and “AI Factories”: A $20T Nvidia implies a world filled with AI supercomputers, and with that comes immense energy usage and engineering challenges. Nvidia’s 2028-era plans for gigawatt-scale AI data centers (Feynman architecture) point to individual AI facilities requiring on the order of a billion watts (1 GW) of power. That is roughly 8× the power of today’s largest AI installations (the current peak is ~150 MW, with sites like xAI’s Colossus 2 aiming for 300 MW). The broader implication is that AI growth at this pace will demand significant expansions in power infrastructure, cooling capacity, and supply of specialized equipment (like high-bandwidth memory and advanced chips). Energy consumption could become a limiting factor or key consideration as companies deploy ever-larger “AI factories.”
- Beyond Data Centers – New Markets: Notably, the 36% CAGR figure and $20T thesis focus primarily on data center AI, but future AI growth may also come from adjacent markets. The report points out that the scenario “does not factor in” emerging sectors such as robotics, autonomous agents, and simulation – areas where AI computing demand could surge by 2030. Success in these domains (e.g. AI in factories/warehouses, self-driving systems, virtual-world simulation for training AIs) would add even more demand for Nvidia’s hardware and software. This suggests the $20T case might even prove conservative if agentic AI systems and other new applications drive additional revenue streams for Nvidia beyond the cloud data center context.
- Shifting Market Dynamics: The AI computing boom is reshaping the tech industry’s landscape. Enormous spend on AI hardware is elevating companies like Nvidia to unprecedented heights – in effect, AI compute has become the strategic asset for Big Tech’s future. This trend is so powerful that it’s been noted as “displacing the FAANGs of the last decade” as the key driver of market value creation. In other words, whereas consumer internet and mobile companies dominated the 2010s, the 2020s are seeing infrastructure providers like Nvidia take center stage. If Nvidia reaches the projected $20T market cap, it would far exceed any company today, fundamentally altering stock market hierarchies and emphasizing how critical AI has become.
- Outlook – A New Era of Growth: Stepping back, the trajectory toward $20T is underpinned by compounding growth metrics that are already outstripping year-old forecasts. Analysts and industry experts keep revising their predictions higher as real-world data (capex, orders, performance gains) comes in. If Nvidia executes on its aggressive roadmap and AI demand remains on fire, the company has a credible path to $20T in value. What seemed “impossible” just a short while ago now looks increasingly feasible – a testament to the breakneck speed of the AI revolution and Nvidia’s pivotal role within it.
Risks And Challenges
- Who’s got the power? Two quarters back, Microsoft had GPUs sitting idle, waiting for power, which is in massively short supply. CoreWeave lost a quarter because a supplier was delayed. Nebius turned away Meta because it did not have enough capacity. Seismic demand coupled with power shortages could become Nvidia’s biggest weakness and impediment.
- Debt: We’ve seen massive numbers in the trillions thrown about - remember, these necessitate capital investment and debt. The huge amount of debt-funded capex should worry investors.
- Product obsolescence: Nvidia’s own product superiority, on an 18-month cadence, renders previous-generation GPUs weaker, which brings us to Michael Burry’s question: are CSPs and GPU buyers depreciating their assets properly?
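A toy illustration of why depreciation schedules matter here (all numbers hypothetical, not from the report): stretching the assumed useful life of a GPU fleet shrinks the annual expense hitting earnings, even if a rapid product cadence makes the hardware economically obsolete sooner.

```python
# Straight-line depreciation of a hypothetical $10Bn GPU purchase under
# two assumed useful lives. A longer assumed life flatters near-term earnings.
gpu_capex = 10.0  # $Bn, hypothetical purchase

for useful_life_years in (6, 3):
    annual_expense = gpu_capex / useful_life_years
    print(f"{useful_life_years}-year life: ${annual_expense:.2f}Bn/year expense")
```

In this sketch, a 6-year schedule books ~$1.67Bn a year against ~$3.33Bn on a 3-year schedule; if the shorter life better matches the hardware's economic reality, the longer schedule is overstating profits - which is exactly Burry's concern.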
- Show me the money: CSPs, foundational models, and other enterprise software customers will need to show clear revenue or profit benefits within the next 1-2 years…
A bumpy ride: I think the growth is achievable, but the trajectory of Nvidia’s stock will be beset by lumpiness, quarterly disappointments, and volatility. When perhaps 25% of the economy is riding on Nvidia in 2030, it will, almost by definition, be a bumpy road.