How the $20B Non-Acquisition Signals the New Shape of AI Consolidation
December 28, 2025
On Christmas Eve 2025, Nvidia announced a deal that didn't look like a traditional acquisition—because it wasn't one. For approximately $20 billion in cash, the chip giant secured a non-exclusive license to Groq's inference technology and brought Groq's founder Jonathan Ross, president Sunny Madra, and roughly 90% of Groq's employees into Nvidia. Groq, the AI chip startup, remains nominally independent under a new CEO.
On paper, it's not an acquisition. In substance, it's a capability transfer without a clean change of control.
According to Groq's official announcement, the companies entered into a "non-exclusive licensing agreement" for Groq's inference technology. Jonathan Ross and Sunny Madra are moving to Nvidia to help "advance and scale the licensed technology." Meanwhile, Groq stays alive—GroqCloud continues operations, and finance chief Simon Edwards steps in as CEO.
CNBC reports the deal at roughly $20 billion, making it by far Nvidia's largest transaction ever—nearly three times larger than its 2019 acquisition of Mellanox ($7 billion). For context, Bloomberg reported Groq's valuation at $6.9 billion following a $750 million funding round in September 2025. If the $20 billion figure is accurate, the deal represents nearly a 3x jump in just three months.
Here's the part that makes this more than a chip story: Jonathan Ross designed Google's Tensor Processing Unit (TPU), the custom chip that powers Google's entire AI infrastructure. According to his LinkedIn and multiple sources, Ross began the TPU project as a Google "20% project" in 2013, then designed, verified, built, and deployed the first TPU across Google's data centers in just 15 months. Nvidia just brought the architect of their biggest competitor's silicon into their own organization—and they did it through a structure that sidesteps the regulatory review a traditional acquisition would have triggered.
Ross and Madra are the asset. That's what Nvidia paid for.
This Is the New Deal Shape in Frontier AI
Once you see it, you see it everywhere. Nvidia's Groq move isn't an outlier—it's part of a clear pattern:
- Microsoft paid Inflection AI roughly $650 million in licensing fees while hiring co-founders Mustafa Suleyman and Karen Simonyan, along with most of the 70-person team, in March 2024. Inflection remained an independent company.
- Google paid Character.AI approximately $2.7 billion for a non-exclusive license in August 2024 while hiring co-founders Noam Shazeer and Daniel De Freitas—both former Google employees who had left when the company refused to release their chatbot.
The structure is identical: license the capability, hire the brain trust, avoid the acquisition.
As Bernstein analyst Stacy Rasgon noted: "Antitrust would seem to be the primary risk here, though structuring the deal as a non-exclusive license may keep the fiction of competition alive."
Bloomberg reported that Microsoft's Inflection deal included $620 million for licensing AI models and around $30 million for waiving legal rights related to mass hiring. The UK's Competition and Markets Authority later designated it a "merger" despite the unconventional structure, signaling regulators are catching on—but also that they're not yet blocking these deals.
Big Tech wants the people and the rights—not the cap table, not the liabilities, not the regulatory review, not the integration work.
What Happened: The Deal Structure
Here's what we know, based on official statements and reporting from CNBC, TechCrunch, and Axios:
The Money
- $20 billion in cash (per Alex Davis, CEO of Disruptive, Groq's lead investor)
- Neither Nvidia nor Groq officially confirmed the price
- Axios reports most Groq shareholders will receive per-share distributions tied to the $20B valuation: ~85% paid upfront, 10% mid-2026, and the remainder at end of 2026
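The staged payout described above is simple arithmetic, and a toy sketch makes the tranches concrete. The percentages are approximate per the Axios reporting, and the $1M stake is purely hypothetical:

```python
def payout_tranches(total_dollars: float, upfront_pct: float = 0.85,
                    mid_2026_pct: float = 0.10):
    """Split a shareholder payout into the three reported tranches:
    ~85% upfront, 10% in mid-2026, remainder at end of 2026.
    Percentages are approximate, per Axios reporting."""
    upfront = total_dollars * upfront_pct
    mid_2026 = total_dollars * mid_2026_pct
    end_2026 = total_dollars - upfront - mid_2026
    return upfront, mid_2026, end_2026

# A hypothetical shareholder with a $1M stake at the $20B valuation:
u, m, e = payout_tranches(1_000_000)
print(f"Upfront: ${u:,.0f}  Mid-2026: ${m:,.0f}  End-2026: ${e:,.0f}")
```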
The People
- Jonathan Ross (Groq founder and CEO) joins Nvidia
- Sunny Madra (Groq president) joins Nvidia
- Approximately 90% of Groq employees are joining Nvidia
- Employees are paid cash for all vested shares; unvested shares convert to Nvidia stock vesting on a schedule
- Simon Edwards (Groq CFO) becomes CEO of the remaining Groq entity
The Technology
- Nvidia gets a non-exclusive license to Groq's inference technology
- Groq's Language Processing Unit (LPU) intellectual property transfers to Nvidia
- GroqCloud, Groq's inference API service, continues operating
What Nvidia Gets
According to Nvidia CEO Jensen Huang: "We plan to integrate Groq's low-latency processors into the NVIDIA AI factory architecture, extending the platform to serve an even broader range of AI inference and real-time workloads."
What Groq Retains
- The corporate entity remains independent
- GroqCloud API service continues operations
- The non-exclusive license means Groq can still license its technology to others (in theory)
Who's Involved: Why Jonathan Ross Specifically Matters
If you don't understand who Jonathan Ross is, this deal looks random. It's not.
The Google TPU Story
In September 2013, Ross transitioned from Software Engineer to Hardware Engineer at Google and launched what became the Tensor Processing Unit as a "20 percent project"—Google's policy allowing engineers one day per week for self-directed work.
Ross and his team designed, verified, built, and deployed the TPU in just 15 months, with production deployment by early 2015. According to Wikipedia, three separate groups at Google were developing AI accelerators; the TPU, a systolic array design, was the one selected.
The TPU became foundational to Google's AI infrastructure. Today, it powers more than 50% of Google's compute workloads and is the backbone of Google Cloud's AI services.
Why This Matters to Nvidia
Google's TPU is Nvidia's most significant competitive threat in AI infrastructure. While Nvidia dominates training workloads with its GPUs, Google has proven that custom ASICs (Application-Specific Integrated Circuits) can match or exceed GPU performance for specific AI tasks—especially inference. Bringing Ross in-house gives Nvidia:
- Deep knowledge of Google's silicon strategy from the person who created it
- Expertise in custom ASIC design for AI workloads
- Credibility in inference-specific chip architecture, an area where Nvidia's GPU-centric approach has limitations
As The Decoder noted: "Nvidia's $20 billion Groq deal is really about blocking Google's TPU momentum."
Groq's LPU: A Different Approach
After leaving Google in 2015, Ross founded Groq in 2016 with a team of former Google engineers. Their goal: build a chip purpose-built for inference, not training.
The result was the Language Processing Unit (LPU), which takes a fundamentally different architectural approach than GPUs:
The Memory Wall Problem
GPUs rely on HBM (High Bandwidth Memory), which sits outside the processing core. According to Groq's technical blog, every time a GPU needs to generate a token (roughly, a word), it must "fetch" model weights from external memory. This creates a "memory wall" where the processor sits idle, waiting for data.
Result: GPUs often run at 30-40% utilization during inference.
Groq's Solution: On-Chip SRAM
The LPU integrates 230 MB of on-chip SRAM with 80 TB/s bandwidth as primary weight storage (not cache). According to Groq, this eliminates the memory wall bottleneck.
Combined with deterministic, clockwork execution and static scheduling, the LPU achieves:
- Nearly 100% compute utilization (versus 30-40% for GPUs during inference)
- Text generation speeds exceeding 1,600 tokens per second
- Roughly one-tenth the power consumption of GPUs for inference workloads
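The bandwidth arithmetic behind these throughput numbers can be sketched in a few lines. All figures below are assumptions for illustration (a ~70 GB model, ~3 TB/s for HBM, and the 80 TB/s aggregate SRAM figure quoted above), not benchmarks; real systems add batching, caching, and sharding effects this ignores:

```python
# Rough estimate of memory-bound single-stream decode throughput.
# Simplifying assumption: generating each token requires streaming all
# model weights from memory once, so throughput is capped at
# bandwidth / model size.

def max_tokens_per_second(model_bytes: float,
                          mem_bandwidth_bytes_per_s: float) -> float:
    """Upper bound on tokens/s when decoding is bandwidth-bound."""
    return mem_bandwidth_bytes_per_s / model_bytes

# Hypothetical 70B-parameter model at 8-bit weights (~70 GB):
hbm_bound = max_tokens_per_second(70e9, 3e12)    # HBM at ~3 TB/s
sram_bound = max_tokens_per_second(70e9, 80e12)  # SRAM at ~80 TB/s aggregate

print(f"HBM-bound: ~{hbm_bound:.0f} tok/s, SRAM-bound: ~{sram_bound:.0f} tok/s")
# → HBM-bound: ~43 tok/s, SRAM-bound: ~1143 tok/s
```

The order-of-magnitude gap in the memory bandwidth bound is what makes four-digit tokens-per-second figures plausible for an SRAM-first design.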
Why This Matters
Inference is where 90% of AI compute happens in production. Training a model is a one-time cost; serving that model to millions of users is ongoing. The company that dominates inference economics wins the AI infrastructure game.
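A toy amortization makes the one-time-versus-ongoing point concrete. Every number here is an assumption chosen for illustration, not a reported figure:

```python
# Illustrative only: why serving cost dominates training cost at scale.
# All inputs below are assumed for the sketch, not reported figures.

training_cost = 100e6            # one-time training run, $100M (assumed)
cost_per_million_tokens = 0.50   # serving cost, $/1M tokens (assumed)
tokens_per_day = 1e12            # 1 trillion tokens served daily (assumed)

daily_serving_cost = tokens_per_day / 1e6 * cost_per_million_tokens
days_to_exceed_training = training_cost / daily_serving_cost

print(f"Daily serving cost: ${daily_serving_cost:,.0f}")
print(f"Serving exceeds training cost after ~{days_to_exceed_training:.0f} days")
```

Under these made-up inputs, cumulative serving spend passes the entire training budget in under a year, which is the economic logic behind optimizing for inference.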
Nvidia's GPU architecture was optimized for training. Groq's LPU was optimized for inference. By licensing Groq's IP and hiring Ross, Nvidia is hedging against the risk that GPU-based inference becomes economically uncompetitive as models scale and deployment costs compound.
Why It Matters: Three Bottlenecks That Make This Deal Rational
Without understanding the bottlenecks, this deal looks like Nvidia overpaying. With context, it's strategic defense.
Bottleneck 1: Inference Economics
The Problem: As AI models grow larger and more complex, the cost of serving them at scale becomes prohibitive. GPUs are excellent for training but inefficient for inference.
The Numbers: Groq's LPU architecture claims 10x better power efficiency and near-100% utilization versus GPUs. If true, this represents a massive cost advantage in production deployments.
What Nvidia Gains: By acquiring Groq's inference IP, Nvidia can offer customers a broader range of deployment options—GPUs for training, LPU-derived chips for inference. This protects Nvidia's market position as inference economics become a competitive battleground.
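To see what a claimed 10x power-efficiency edge means in dollars, here is a back-of-envelope energy calculation. The GPU wattage, per-stream throughput, and electricity rate are all assumptions, and the 10x factor is Groq's claim rather than an independent measurement:

```python
# Back-of-envelope joules-per-token comparison under the article's claims.
GPU_WATTS = 700.0          # assumed GPU board power
GPU_TOKENS_PER_S = 100.0   # assumed effective decode throughput

gpu_joules_per_token = GPU_WATTS / GPU_TOKENS_PER_S   # energy per token
lpu_joules_per_token = gpu_joules_per_token / 10.0    # claimed 10x efficiency

def energy_cost_usd(tokens: float, joules_per_token: float,
                    usd_per_kwh: float = 0.10) -> float:
    """Electricity cost of serving `tokens` at a given energy per token."""
    kwh = tokens * joules_per_token / 3.6e6  # 1 kWh = 3.6e6 J
    return kwh * usd_per_kwh

# Energy bill for serving 1 trillion tokens at $0.10/kWh (assumed rate):
gpu_cost = energy_cost_usd(1e12, gpu_joules_per_token)
lpu_cost = energy_cost_usd(1e12, lpu_joules_per_token)
print(f"GPU: ${gpu_cost:,.0f}  LPU-style: ${lpu_cost:,.0f}")
```

At fleet scale, trillions of tokens per day rather than in total, that per-token energy gap compounds into the cost advantage the article describes.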
Bottleneck 2: Memory and Packaging Supply
The Problem: Advanced chip packaging (CoWoS, HBM3) is in severe shortage. TSMC's capacity is booked years in advance. Any architecture that reduces dependency on exotic packaging has strategic value.
Groq's Advantage: The LPU's on-chip SRAM approach reduces reliance on HBM, potentially easing supply chain constraints.
What Nvidia Gains: Diversification away from HBM-dependent architectures gives Nvidia supply chain optionality during a period of unprecedented demand.
Bottleneck 3: The Talent Pool
The Problem: There are very few people on Earth who can design custom AI accelerators at scale. Ross designed the TPU. The Groq team built the LPU. This expertise is irreplaceable.
What Nvidia Gains: By hiring Ross, Madra, and 90% of Groq's team, Nvidia prevents these individuals from joining competitors (Google, Meta, Amazon) or building another startup. In talent-constrained fields, removing competitors' access to scarce talent is as valuable as acquiring the talent yourself.
As Yahoo Finance reported: "Nvidia's Groq deal underscores how the AI chip giant uses its massive balance sheet to 'maintain dominance.'"
What to Watch: Regulatory Scrutiny and the Future of "Pseudo-Acquisitions"
Antitrust Implications
The "license + acquihire" structure is designed to avoid traditional M&A review, but regulators are catching on:
- The UK's CMA designated Microsoft's Inflection deal a "merger" despite the licensing structure
- The DOJ is reportedly investigating Google's Character.AI deal for potential antitrust violations
- CNBC quoted analyst Stacy Rasgon: "Structuring the deal as a non-exclusive license may keep the fiction of competition alive"
Expect the FTC and DOJ to scrutinize the Nvidia-Groq deal closely. If regulators decide this structure is an end-run around merger review, we may see new guidance or enforcement actions.
What This Means for Startups
If you're building an AI infrastructure company, the "license + acquihire" model changes what exit means:
Good news:
- Valuations can be much higher than traditional acquisitions (Groq: $6.9B valuation → $20B deal)
- Deals close faster with less regulatory risk
- Shareholders get cash without the company being absorbed
Bad news:
- Equity outcomes are no longer automatic—you need leverage (scarce talent, critical IP)
- The company may continue as a "zombie entity" with minimal staff
- Employees moving to the acquirer may see unvested equity converted to acquirer stock with new vesting schedules
According to Axios, Groq employees joining Nvidia will have their unvested shares paid out at the $20B valuation via Nvidia stock that vests on a schedule—meaning they're subject to Nvidia's future stock performance and vesting cliffs.
Financing Structures to Watch
Nvidia spent $20 billion cash—one-third of its $60 billion cash pile. This sets a precedent for how much Big Tech is willing to spend to deny competitors access to scarce talent and IP.
Watch for:
- More non-exclusive licensing deals with talent transfers
- Increased use of earnouts tied to technology milestones rather than revenue
- Startups structuring cap tables to maximize leverage in pseudo-acquisitions
The Employee Conversation
If you work at an AI startup, ask:
- What happens to unvested equity? (Cash? Acquirer stock? New vesting schedule?)
- Is the license exclusive or non-exclusive? (Does the startup actually survive, or is it a shell?)
- Who's moving, and who's staying? (If 90% of the team leaves, the "independent company" is a fiction)
- What are the earnout terms? (Upfront cash vs. deferred payments tied to milestones)
The Groq deal shows these deals can be lucrative for shareholders and employees—but the terms matter enormously.
The Bottom Line
The Nvidia-Groq deal isn't just a $20 billion transaction. It's a strategic playbook for how Big Tech consolidates AI capabilities without triggering antitrust review:
- License the IP (non-exclusively, to maintain the appearance of competition)
- Hire the brain trust (founders, executives, 90% of the team)
- Leave the corporate shell alive (with a skeleton crew and continued operations)
- Pay a premium over last valuation (to overcome shareholder objections)
- Structure payouts over time (to retain key employees and manage risk)
For Nvidia, this deal is about blocking Google's TPU momentum and securing the talent and IP to compete in inference—the workload that will dominate AI infrastructure spending over the next decade.
For the AI industry, this deal confirms that talent is the real asset, not cap tables. If you have the people who can ship, you have leverage. If you don't, your valuation is theoretical.
For regulators, this is a test case. If the FTC and DOJ allow this structure to stand, expect every Big Tech company to use it. If they push back, we may see new guidance on what constitutes a "merger" in the age of AI.
The stakes are clear: whoever controls inference infrastructure controls the economics of AI deployment. And Nvidia just spent $20 billion to make sure that's them.
Sources
- Groq Official Announcement: Non-Exclusive Licensing Agreement
- CNBC: Nvidia buying AI chip startup Groq for $20 billion
- CNBC: Nvidia-Groq deal structured to keep 'fiction of competition alive'
- TechCrunch: Nvidia to license AI chip challenger Groq's tech and hire its CEO
- Bloomberg: Groq Raises $750 Million at $6.9 Billion Valuation
- Axios: Nvidia deal a big win for Groq employees and investors
- The Decoder: Nvidia's $20 billion Groq deal is really about blocking Google's TPU momentum
- Groq Blog: Inside the LPU - Deconstructing Groq's Speed
- Groq Blog: What is a Language Processing Unit?
- The Information: Microsoft Pays Inflection $650 Million
- Bloomberg: Microsoft to Pay Inflection AI $650 Million
- TechCrunch: Character.AI CEO Noam Shazeer returns to Google
- TechCrunch: UK regulator greenlights Microsoft's Inflection acquihire
- Yahoo Finance: Nvidia's Groq deal underscores how the AI chip giant uses its massive balance sheet
Published on NerdLevelTech.com | News Category | December 28, 2025