Nvidia's acquisition of Groq's fast-inference LPUs is worth the high $20Bn price tag, and should yield results like the Mellanox deal did.

Nvidia’s acquisition of/licensing deal with Groq
The deal: Prima facie this appears to be a licensing deal, but the $20Bn price tag and the hiring of Groq's key people as Nvidia employees indicate that it is a lot more, and the structuring of the deal suggests it was done to avoid the intense antitrust or other regulatory scrutiny that would have taken a long time. These are the three key hires: Jonathan Ross (Founder & CEO), the former Google TPU pioneer and the brain behind the LPU, which is tailor-made for AI inference. Sunny Madra (President), a key figure in Groq's business and developer expansion, who is also transitioning to Nvidia to help scale the licensed technology globally. And the "Core Engineering Team": a phalanx of the senior brains-trust and engineers behind the SRAM-based architecture and the software-first compiler responsible for Groq's speed, who are also moving over to Nvidia. The intensity of the AI race, evident in the constant jostling between OpenAI, Grok, Gemini, Perplexity, and Anthropic, means that companies cannot spend time on the regulatory processes required for major acquisitions. Ensuring that the founders and key people become Nvidia employees gives Nvidia exactly what it wants: access to the technology and the engineers who can advance it at lightning speed.
The technology is solid. For Nvidia, a key part of its take-no-prisoners, market-dominating strategy has been offering customers complete, integrated solutions. Nowhere is that more evident than in the 2019 acquisition of Mellanox for $7Bn for its networking strength, which is the glue that makes the $3Mn NVL72 super-chip possible. Nvidia's networking revenue last quarter was $8.2Bn! So it's not just AI chips, but ultra-fast communication chips (developed in its Israeli R&D center), software, and a complete ecosystem. Groq's inference technology adds another strong weapon to Nvidia's arsenal, deepening its ecosystem and moat. These Language Processing Units, or LPUs, are designed to deliver extremely fast, low-cost inference, which is increasingly becoming the dominant use case for AI.
Expanding market: Nvidia's GPUs also have strong inference capabilities, and roughly 40% of their workload is inference. However, ASICs (Application-Specific Integrated Circuits), such as Alphabet's TPUs (Tensor Processing Units), are designed for specific inference tasks and do them much faster and cheaper than GPUs. In time, Nvidia's market share will shrink from the 85% it holds today to about 60% of a fast-expanding market by 2030. Even then, it should still be selling around $550-600Bn worth of GPUs, compared to $185Bn today. I strongly believe that ASICs, TPUs, and LPUs will be a serious and integral part of AI inference computing, and AI use cases are trending in that direction. Software, too, will get smarter to squeeze out more computing at a cheaper cost. See this article for more details.
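As a rough sanity check on these figures (the 85%-to-60% share shift and the revenue numbers are this article's estimates, not reported data), the implied size of the total AI-chip market can be backed out with simple arithmetic:

```python
# Implied total AI-chip market, derived from the article's estimates.
# Assumption: Nvidia revenue = Nvidia market share x total market.

nvidia_rev_today = 185e9          # article's figure for today
share_today = 0.85                # ~85% share today
nvidia_rev_2030_low = 550e9       # article's 2030 estimate, low end
nvidia_rev_2030_high = 600e9      # article's 2030 estimate, high end
share_2030 = 0.60                 # projected ~60% share by 2030

total_today = nvidia_rev_today / share_today            # implied market today
total_2030_low = nvidia_rev_2030_low / share_2030       # implied 2030 market, low
total_2030_high = nvidia_rev_2030_high / share_2030     # implied 2030 market, high

print(f"Implied market today: ${total_today/1e9:.0f}Bn")
print(f"Implied 2030 market: ${total_2030_low/1e9:.0f}Bn-${total_2030_high/1e9:.0f}Bn")
```

So even with a shrinking share, Nvidia's dollar sales roughly triple because the implied market grows from around $218Bn to roughly $917Bn-$1,000Bn.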
Valuation: The valuation seems on the higher side at $20Bn, but I believe Groq's endgame couldn't have been anything other than being acquired by a hyperscaler or a chip-market leader. The non-Nvidia chip market is around $45Bn, but it should increase by 3-4 times over the next 4-5 years, so Groq's LPU revenue will likely grow to over $4Bn, and, more importantly, it will not leave Nvidia's fold. The deal also prevents Groq from developing a CUDA-like software ecosystem, which is really Nvidia's biggest moat. Additionally, Nvidia gets the over 2 million developers now using GroqCloud to run models like Llama 3 and Mistral, and over 100,000 LPUs by early 2026, creating the largest AI inference cluster outside of the major hyperscalers (Google, Amazon, Microsoft).
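The same back-of-the-envelope arithmetic can size Groq's projected slice of that expanded market (all inputs below are this article's estimates, not reported figures):

```python
# Groq's implied share of the non-Nvidia chip market,
# using the article's figures: $45Bn today, growing 3-4x in 4-5 years.

non_nvidia_today = 45e9
market_low = non_nvidia_today * 3    # $135Bn after 3x growth
market_high = non_nvidia_today * 4   # $180Bn after 4x growth

groq_rev = 4e9                       # article's projection for Groq LPU revenue

# Groq's implied share of the non-Nvidia market at each end of the range
share_low = groq_rev / market_high
share_high = groq_rev / market_low
print(f"Groq's implied share: {share_low:.1%}-{share_high:.1%}")
```

In other words, $4Bn of LPU revenue would amount to roughly 2-3% of the expanded non-Nvidia market, a modest slice that Nvidia now keeps inside its own fold.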

Nebius' masterly execution as an integrated neocloud player allows it to borrow at very cheap interest rates with very little shareholder dilution.

Nebius is executing brilliantly as an integrated neocloud player with tremendous reach, value addition and strong pricing power.