Nvidia's largest acquisition in history: $20 billion for Groq's technology and the father of Google's TPU.


Nvidia has agreed to pay approximately $20 billion in cash for the core assets and a technology license of AI inference chip startup Groq, marking Nvidia's largest single transaction to date. However, both parties emphasized that the deal is not an acquisition of Groq's equity, but rather a transaction centered on a non-exclusive technology license and the hiring of senior talent.

Groq subsequently confirmed on its official blog that it had signed a non-exclusive inference technology licensing agreement with Nvidia. Jonathan Ross, founder and CEO of Groq, who previously oversaw the design of Google's TPU chip, along with President Sunny Madra and several key technology executives, will join Nvidia to help advance and expand the technology. Groq itself will continue to operate as an independent company, with former CFO Simon Edwards becoming the new CEO. Its cloud service, GroqCloud, is not included in this transaction and will continue to operate as usual.

Nvidia's largest deal in history: signing a technology licensing agreement with Groq.

The deal is worth approximately $20 billion, roughly triple the $6.9 billion valuation Groq reached when it closed its funding round in September. That round was led by Disruptive, with investors including BlackRock, Neuberger Berman, Samsung, Cisco, Altimeter, and 1789 Capital, where Donald Trump Jr. is a partner.

With its cash reserves growing rapidly, Nvidia has continued to expand its investments across the AI ecosystem in recent years. As of the end of October, Nvidia held $60.6 billion in cash and short-term investments, far exceeding the $13.3 billion it held at the beginning of 2023.

In addition to Groq, Nvidia has in recent years invested in AI cloud provider CoreWeave, model company Cohere, and energy and computing infrastructure provider Crusoe. In September of this year, it announced plans to invest up to $100 billion in OpenAI and $5 billion in Intel to strengthen its position across the AI supply chain.

Jensen Huang discusses the Groq case: Integrating low-latency processors to enhance real-time inference capabilities.

CNBC cited an internal email from Nvidia CEO Jensen Huang to employees indicating that Nvidia plans to integrate Groq's low-latency processors into the "Nvidia AI Factory" architecture to support a wider range of AI inference and real-time workloads.

In the email, Jensen Huang emphasized, "We are licensing Groq's intellectual property and recruiting top talent, but we are not acquiring Groq as a company." This transaction structure echoes Nvidia's approach in September of this year, when it spent over $900 million to bring in the team of AI hardware startup Enfabrica through a combination of technology licensing and talent acquisition.

Analysts point out that licensing technology and hiring talent, rather than pursuing full mergers and acquisitions, has become a common strategy for tech giants amid increasingly stringent antitrust regulation. For Nvidia, this move not only quickly fills a gap in its inference technology but also secures a key competitive position in the second half of the AI era without triggering regulatory scrutiny.

The AI battlefield is shifting, and ASIC architecture is becoming key.

Founded in 2016 by several former Google engineers, including Jonathan Ross who participated in the design of Google's TPU (Tensor Processing Unit), Groq focuses on an LPU (Language Processing Unit) architecture designed specifically for AI inference. This architecture emphasizes extremely low latency, stable response times, and high energy efficiency, making it particularly suitable for real-time dialogue, voice assistants, finance, and industrial applications.

As AI applications gradually move from model training to large-scale deployment, the industry generally expects future growth in computing demand to come from inference rather than training. In contrast to the overwhelming advantage GPUs hold in the training market, the inference field faces competition from Google TPUs, dedicated ASICs, and other novel chips.

This article, titled "Nvidia's Largest Acquisition in History: $20 Billion to Acquire Groq Technology and the Father of Google TPU," first appeared on ABMedia.
