Meta-Broadcom MTIA Deal: 1GW of 2nm Custom AI Silicon
April 21, 2026
TL;DR
On April 14, 2026, Broadcom and Meta announced an extended strategic partnership through 2029 under which Meta will deploy over 1 gigawatt of custom MTIA (Meta Training and Inference Accelerator) silicon in an initial phase, with a "multi-gigawatt rollout" to follow[1][2]. Broadcom described the deal as delivering "the industry's first 2nm AI compute accelerator" built on its XPU platform[1]. On the same day, Broadcom CEO Hock Tan — who joined Meta's board in February 2024[3] — said he would not stand for reelection and will transition to an advisor role on Meta's custom silicon roadmap, a governance change reflecting the scale of the new commercial relationship[4].
The deal extends Broadcom's dominance in hyperscaler custom accelerators. Broadcom reported $8.4 billion of AI revenue in its Q1 FY2026 (+106% year over year) and guided to $10.7 billion for Q2 FY2026 (+140% YoY)[5]. Tan has told investors Broadcom now has "line of sight" to AI chip revenue "significantly in excess of" $100 billion in fiscal 2027[6].
What You'll Learn
- What Meta and Broadcom actually announced on April 14, 2026, and what "1 GW of MTIA" means in practice
- How the four MTIA generations (300, 400, 450, 500) scale, and which one hits 2nm first
- Why Hock Tan stepped off Meta's board on the same day the deal was announced
- How the Broadcom deal fits alongside Meta's separate 6 GW AMD agreement and its ongoing Nvidia spend
- What Broadcom's Q1 AI revenue and $100B-per-year forecast tell us about the custom-silicon wave
What Was Actually Announced
The April 14 announcement is not Meta's first custom-chip disclosure — it is the commercial scaffolding underneath a roadmap Meta already unveiled on March 11, 2026, when it detailed four successive MTIA generations dubbed MTIA 300, MTIA 400, MTIA 450, and MTIA 500, all slated to ship by the end of 2027 on a roughly six-month cadence[7][8].
What is new on April 14 is the depth of the Broadcom commitment:
| Element | Detail | Source |
|---|---|---|
| Partnership horizon | Through 2029 | Broadcom press release[1] |
| Initial deployment | Over 1 GW of MTIA silicon | Broadcom press release[1] |
| Long-term scale | "Multi-gigawatt rollout" | Broadcom press release[1] |
| Process node | Described as "industry's first 2nm AI compute accelerator" | Broadcom press release[1] |
| Platform | Broadcom XPU (custom accelerator) platform | Broadcom press release[1] |
| Networking | Broadcom advanced Ethernet for scale-up, scale-out, scale-across | Broadcom press release[1] |
One gigawatt, in the data-center sense, is a power envelope rather than a chip count, but it is enormous. For comparison, Anthropic's recently expanded Google TPU deal totals roughly 3.5 GW of training and serving capacity coming online in 2027[9]. A single 1 GW facility — the category Meta's Prometheus supercluster in New Albany, Ohio is expected to occupy when it comes online in 2026 — would plausibly consume on the order of hundreds of thousands of accelerators depending on chip, rack density, and overhead[10]. The point of the April 14 announcement is that Meta now has a contracted silicon pipeline — not just a single chip — to fill capacity of that size with its own design rather than a third-party GPU.
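To make that order of magnitude concrete, here is a minimal back-of-envelope sketch. The 1,200 W TDP is Meta's published MTIA 400 figure; the PUE and the share of facility power going to accelerators (versus CPUs, networking, and storage) are assumptions for illustration, not numbers from the announcement.

```python
# Back-of-envelope: how many accelerators fit in a 1 GW power envelope?
# Illustrative assumptions only -- the 1,200 W TDP is Meta's published
# MTIA 400 figure; PUE and the accelerator share of IT power are guesses.

FACILITY_POWER_W = 1e9       # 1 GW facility power envelope
PUE = 1.2                    # assumed power usage effectiveness (cooling overhead)
ACCELERATOR_SHARE = 0.6      # assumed fraction of IT power spent on accelerators
CHIP_TDP_W = 1200            # MTIA 400 TDP per Meta's March 2026 disclosure

it_power = FACILITY_POWER_W / PUE                  # power left after facility overhead
accelerator_power = it_power * ACCELERATOR_SHARE   # budget for the chips themselves
chips = accelerator_power / CHIP_TDP_W

print(f"~{chips:,.0f} accelerators")               # ~416,667 under these assumptions
```

Shrink the accelerator share or assume a hotter chip and the count moves, but it stays firmly in the hundreds of thousands.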
The MTIA Roadmap, Chip by Chip
Meta's March disclosure gave enough specification to compare the four generations directly[7][8]:
| Chip | Purpose | Key specs | Status |
|---|---|---|---|
| MTIA 300 | Ranking and recommendations training | 216 GB HBM, 6.12 TB/s bandwidth, 1 compute + 2 network chiplets, 200 GB/s scale-out | In production[7] |
| MTIA 400 | Evolves toward GenAI while preserving R&R | 1,200 W TDP, 288 GB HBM, two compute chiplets, 72-accelerator scale-up domain; ~400% higher FP8 FLOPS and 51% higher HBM bandwidth vs. MTIA 300 | Lab-tested, on path to deployment[8] |
| MTIA 450 | Intermediate generation | Six-month cadence between 400 and 500[7] | Scheduled |
| MTIA 500 | First 2nm modular chiplet design | Targets the "industry's first 2nm AI compute accelerator" billing from the Broadcom release | 2027/2028 target[1][11] |
Two numbers from Meta's own roadmap slide are worth pulling out: from MTIA 300 to MTIA 500, HBM bandwidth rises 4.5x and compute FLOPS rises 25x (comparing MTIA 300's MX8 throughput to MTIA 500's MX4 throughput, i.e. across lower-precision formats)[7]. That is the generational climb Broadcom is now under contract to co-deliver, from chiplet integration to advanced packaging to Ethernet-based scale-out networking[1].
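As a quick check on those multipliers, the sketch below works Meta's published figures (6.12 TB/s for MTIA 300, the +51% bandwidth claim for MTIA 400, the 4.5x and 25x roadmap deltas) into implied numbers. The derived bandwidths are arithmetic consequences of Meta's slide, not independently confirmed specifications.

```python
# Working the roadmap multipliers against the published MTIA 300 baseline.
# Only the 6.12 TB/s, +51%, 4.5x, and 25x figures come from Meta's
# disclosures; the derived numbers below are simple arithmetic, not specs.

mtia300_bw_tbs = 6.12                    # MTIA 300 HBM bandwidth (TB/s), per Meta

mtia400_bw_tbs = mtia300_bw_tbs * 1.51   # "51% higher"  -> ~9.24 TB/s implied
mtia500_bw_tbs = mtia300_bw_tbs * 4.5    # "4.5x"        -> ~27.5 TB/s implied

# Compute scales much faster than memory: 25x FLOPS against 4.5x bandwidth
# means FLOPS-per-byte rises ~5.6x across the roadmap (and that 25x already
# mixes precisions: MX8 on MTIA 300 vs MX4 on MTIA 500).
intensity_shift = 25 / 4.5

print(f"MTIA 400 implied HBM bandwidth: {mtia400_bw_tbs:.2f} TB/s")
print(f"MTIA 500 implied HBM bandwidth: {mtia500_bw_tbs:.1f} TB/s")
print(f"FLOPS-to-bandwidth ratio shift: {intensity_shift:.1f}x")
```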
Hock Tan Leaves Meta's Board the Same Day
A deal of this magnitude between a customer and a supplier is hard to govern when the supplier's CEO also sits on the customer's board. Hock Tan, who has been Broadcom's president and CEO since March 2006[12], joined Meta's board on February 14, 2024[3]. On April 14, 2026, he informed Meta he would not stand for reelection, moving instead to an advisor role focused specifically on Meta's custom silicon roadmap[4].
Neither company framed the move as compelled by a specific conflict — the Broadcom release called it a transition "given the scale of this expanded partnership"[1] — but the optics required it. With a multi-year, multi-generation chip commitment running through 2029, a Broadcom CEO on Meta's board would be voting on AI infrastructure decisions that could directly affect Broadcom's largest customer relationship.
Where the MTIA Deal Fits in Meta's Broader Spend
This is one of three major silicon relationships Meta is simultaneously running, and the April 14 deal explicitly does not replace Nvidia. The map looks roughly like this:
| Vendor | Commitment | Signed | Source |
|---|---|---|---|
| Broadcom (MTIA) | >1 GW initial; multi-GW through 2029 | April 14, 2026 | Broadcom/Meta[1][2] |
| AMD (Instinct) | Up to 6 GW of custom MI450 and successors, estimated ~$60B over five years, with warrants for 160M AMD shares tied to delivery | February 24, 2026 | AMD press release[13] |
| Nvidia | Ongoing large-scale GPU purchases across its data-center fleet | Continuing | Analyst reporting[14] |
On top of that, Meta has guided 2026 capital expenditure to a range of $115–$135 billion (up from $72.2 billion in 2025) as it pushes data-center capacity from today's footprint toward the 5 GW Hyperion cluster in Louisiana and the 1 GW Prometheus supercluster in Ohio[10][15]. The pattern is deliberately multi-vendor: Meta gets pricing and supply leverage by running custom Broadcom silicon, custom AMD GPUs, and Nvidia GPUs in parallel rather than betting the farm on any one of them.
What This Says About Broadcom's AI Business
For Broadcom, the April 14 announcement is one more brick in an AI revenue wall that has been growing faster than almost anything else in the semiconductor sector (a quick sanity check on the growth math follows the list):
- Q1 FY2026: $8.4 billion in AI revenue, +106% year over year, ~43% of total quarterly revenue of $19.31 billion[5].
- Q2 FY2026 guide: $10.7 billion in AI revenue, +140% year over year[5].
- Multi-year outlook: Tan told investors on the March 4 earnings call that Broadcom has "line of sight" to AI chip revenue "significantly in excess of" $100 billion in fiscal 2027[6].
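Here is that sanity check, backing out what the two growth rates imply about Broadcom's year-ago AI revenue. The implied baselines below are derived from the reported figures, not numbers Broadcom stated.

```python
# Sanity-checking the growth rates: what do +106% and +140% imply
# about Broadcom's year-ago AI revenue? Derived figures, not reported ones.

def implied_base(current_bn: float, yoy_growth_pct: float) -> float:
    """Back out the year-ago figure from a current value and YoY growth."""
    return current_bn / (1 + yoy_growth_pct / 100)

q1_base = implied_base(8.4, 106)    # ~$4.1B implied for Q1 FY2025
q2_base = implied_base(10.7, 140)   # ~$4.5B implied for Q2 FY2025

print(f"Implied Q1 FY2025 AI revenue: ${q1_base:.1f}B")
print(f"Implied Q2 FY2025 AI revenue: ${q2_base:.1f}B")
```

In other words, the business is on track to more than double off a base that was already north of $4 billion a quarter a year ago.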
The customer list making those numbers possible is narrow but marquee. Broadcom is the design and networking partner for Google's Ironwood TPU v7, with coverage reportedly extending through 2031[16]; it co-develops MTIA with Meta; and it serves additional hyperscaler customers whose custom silicon it builds on its XPU platform. The April 14 Meta deal is notable less because it is a new customer and more because it quantifies the pipeline: one customer, multiple gigawatts, through 2029, at a process node Broadcom is first to commercialize for AI accelerators.
What It Means for Developers and Builders
If you work on AI applications rather than on data-center procurement, the impact is indirect but real:
- More inference capacity, more often priced competitively. Every gigawatt of custom silicon Meta operates is a gigawatt Meta does not need to buy on Nvidia's price curve. To the extent that pressure flows into what Meta charges for its own products (Llama hosting, ads inference, AI features in WhatsApp and Instagram), it eases the cost floor for builders who consume those services[2].
- An open model pipeline that outlasts a single GPU generation. Meta has been the most aggressive open-weights publisher at the frontier; multi-gigawatt MTIA capacity is the training and serving substrate that lets it keep doing that without being capacity-gated by any one vendor[17].
- The "custom chip" becomes the default, not the experiment. Between Google's TPUs, Amazon's Trainium line, Microsoft's Maia, and now Meta's MTIA on Broadcom 2nm, every major hyperscaler now has a multi-year, multi-generation roadmap for silicon it designs itself. For enterprise buyers, that changes the competitive landscape more than any one model release.
The Bottom Line
The April 14 Meta-Broadcom announcement is a pricing and supply-chain story dressed as a press release. By locking in >1 GW of custom MTIA silicon now and multiple gigawatts through 2029 — on a process node Broadcom says it is first to ship for AI — Meta secures a path to train and serve its next generation of models without being price-constrained by any single GPU supplier. For Broadcom, it further cements a business line that grew 106% in a single quarter and whose CEO now talks about $100 billion a year as a line of sight, not a stretch goal. For the rest of the industry, it confirms a pattern every hyperscaler is now running in parallel: buy Nvidia, buy AMD, and build your own.
References
1. Broadcom, "Broadcom Announces Extended Partnership with Meta to Deploy Technology to Support Multi-Gigawatts of Meta's Custom Silicon, MTIA," press release, April 14, 2026. https://investors.broadcom.com/news-releases/news-release-details/broadcom-announces-extended-partnership-meta-deploy-technology
2. Meta Newsroom, "Meta Partners With Broadcom to Co-Develop Custom AI Silicon," April 14, 2026. https://about.fb.com/news/2026/04/meta-partners-with-broadcom-to-co-develop-custom-ai-silicon/
3. Meta Newsroom, "Hock E. Tan and John Arnold to Join Meta Board of Directors," February 14, 2024. https://about.fb.com/news/2024/02/hock-tan-and-john-arnold-join-meta-board-of-directors/
4. Bloomberg, "Meta, Broadcom Deepen Ties on Chips; Tan Departs Meta Board," April 14, 2026. https://www.bloomberg.com/news/articles/2026-04-14/meta-broadcom-deepen-ties-on-chips-tan-departs-meta-s-board
5. FinancialContent, "Broadcom's AI Revenue Rockets 106% to $8.4 Billion as Custom Silicon Dominates the Infrastructure Build-Out," March 24, 2026 (summarizing Broadcom Q1 FY2026 earnings). https://markets.financialcontent.com/stocks/article/marketminute-2026-3-24-broadcoms-ai-revenue-rockets-106-to-84-billion-as-custom-silicon-dominates-the-infrastructure-build-out
6. CNBC, "Broadcom CEO Hock Tan sees AI chip revenue 'significantly' above $100 billion next year," March 4, 2026. https://www.cnbc.com/2026/03/04/broadcom-sees-ai-chip-sales-significantly-over-100-billion-in-2027.html
7. Meta AI, "Four MTIA Chips in Two Years: Scaling AI Experiences for Billions," March 11, 2026. https://ai.meta.com/blog/meta-mtia-scale-ai-chips-for-billions/
8. Tom's Hardware, "Meta reveals four new MTIA chips built for AI inference — to be released on a six-month cadence," March 2026. https://www.tomshardware.com/tech-industry/semiconductors/meta-reveals-four-new-mtia-chips-built-for-ai-inference
9. Anthropic/Google reporting on the 3.5 GW TPU capacity expansion, April 2026 disclosures summarized in our earlier coverage of Anthropic's $30B ARR milestone. https://nerdleveltech.com/anthropic-30-billion-arr-surpasses-openai
10. NBC4 Columbus, "Meet Prometheus: World's highest capacity data center slated to open in Ohio in 2026." https://www.nbc4i.com/news/local-news/new-albany/meet-prometheus-worlds-highest-capacity-data-center-slated-to-open-in-ohio-in-2026/
11. Oplexa, "Meta Broadcom AI Chip Deal 2026: 1GW MTIA, 2nm," April 15, 2026. https://oplexa.com/meta-broadcom-ai-chip-deal-2026/
12. Broadcom, executive biography for Hock E. Tan. https://www.broadcom.com/company/leadership/hock-tan
13. AMD, "AMD and Meta Announce Expanded Strategic Partnership to Deploy 6 Gigawatts of AMD GPUs," press release, February 24, 2026. https://www.amd.com/en/newsroom/press-releases/2026-2-24-amd-and-meta-announce-expanded-strategic-partnersh.html
14. Benzinga, "The Meta-Broadcom AI Chip Deal: A Shift From Nvidia Dependence, Not Displacement," April 2026. https://www.benzinga.com/Opinion/26/04/51918865/the-meta-broadcom-ai-chip-deal-a-shift-from-nvidia-dependence-not-displacement
15. Data Center Dynamics, "Meta estimates 2026 capex to be between $115-135bn, as data center spend grows," 2026. https://www.datacenterdynamics.com/en/news/meta-estimates-2026-capex-to-be-between-115-135bn/
16. TrendForce, "Broadcom Reportedly Eyes $100B AI Chip Revenue in 2027, Backed by 6 Key Clients Like Google," March 5, 2026. https://www.trendforce.com/news/2026/03/05/news-broadcom-reportedly-eyes-100b-ai-chip-revenue-in-2027-backed-by-six-key-clients-including-google-meta/
17. Our earlier coverage of the custom AI chip race and the hyperscaler vs. Nvidia dynamic. https://nerdleveltech.com/the-custom-ai-chip-race-2026-meta-google-amazon-microsoft-vs-nvidia