Amazon-Anthropic $100B Deal: 5GW of AWS Trainium Compute
April 22, 2026
TL;DR
On April 20, 2026, Amazon and Anthropic announced an expanded strategic collaboration under which Anthropic will spend over $100 billion on AWS over the next ten years and secure up to 5 gigawatts of Trainium capacity to train and serve Claude[1][2]. As part of the same deal, Amazon is investing an additional $5 billion in Anthropic immediately, with up to $20 billion more tied to commercial milestones, potentially bringing Amazon's cumulative stake to roughly $33 billion on top of the $8 billion it invested between 2023 and 2024[3][4].
The compute ramp is front-loaded. Anthropic says nearly 1 GW of combined Trainium2 and Trainium3 capacity will be online by the end of 2026, with meaningful new Trainium2 capacity arriving in Q2 2026[1]. Claude already runs on more than 1 million Trainium2 chips via Project Rainier, the cluster AWS activated for Anthropic in late 2025[5][6]. The new agreement extends that pipeline across the Trainium2, Trainium3, and future Trainium4 generations, and bolts a fully integrated Claude console directly into the AWS console so enterprise customers can access Anthropic's full feature set without a separate contract[1].
What You'll Learn
- What Amazon and Anthropic actually announced on April 20, 2026, and what "5 GW" translates to in chips
- How Amazon's investment structure works: $5B immediate, up to $20B milestone-based, $33B potential total
- Where the 1 GW of Trainium2+Trainium3 capacity for 2026 lands relative to Project Rainier's current 1M+ chip deployment
- How Trainium3 (144 GB HBM3e, 2.52 FP8 PFLOPs) compares to Trainium2 on paper
- Why Anthropic is running a multi-cloud strategy across AWS, Google TPU, and Nvidia in parallel
- What the deal means for the custom-silicon race against Nvidia
What Was Announced on April 20, 2026
The headline is simple: Anthropic signs up for $100 billion of AWS spend over a decade, Amazon signs a new check to keep Anthropic close, and the two companies expand the compute envelope underneath Claude[1][2].
The detail matters, because there are three distinct commitments stacked into one press release:
| Element | Detail | Source |
|---|---|---|
| Anthropic's AWS spend | Over $100 billion over the next 10 years | Anthropic press release[1] |
| Compute capacity secured | Up to 5 GW across Trainium generations | Anthropic press release[1] |
| Near-term deployment | Nearly 1 GW of Trainium2 + Trainium3 capacity by end of 2026; new Trainium2 capacity in Q2 2026 | Anthropic press release[1] |
| Amazon's immediate new investment | $5 billion at Anthropic's current $380B post-money valuation | CNBC[3], TechCrunch[7] |
| Amazon's milestone-based additional investment | Up to $20 billion more tied to commercial milestones | CNBC[3], Bloomberg[8] |
| Previous cumulative Amazon investment | $8 billion across September 2023, March 2024, and November 2024 rounds | GeekWire[9] |
| Hardware scope | Trainium2, Trainium3, Trainium4, plus Graviton | Anthropic press release[1], TechCrunch[7] |
| Console integration | Full Anthropic-native Claude console available inside AWS, with existing AWS billing | Anthropic press release[1] |
Five gigawatts is a power envelope, not a chip count. For a rough sense of scale: Anthropic's other major compute supplier, Google (via Broadcom), committed 3.5 GW of next-generation TPU capacity coming online starting in 2027[10]. Meta's single-site Prometheus data center in New Albany, Ohio, one of the largest sites in the industry, is expected to come online at roughly 1 GW in 2026[11]. A 5 GW Trainium footprint across AWS regions, built over a decade, is in the same league as the biggest hyperscaler commitments on the table anywhere.
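To make the power envelope concrete, here is a back-of-envelope conversion from gigawatts to accelerator count. The per-chip draw and facility overhead below are illustrative assumptions, not figures from the announcement; actual Trainium board power and data-center overhead are not disclosed here.

```python
# Back-of-envelope: converting a power envelope (GW) into a rough chip count.
# The per-chip wattage and overhead fraction are ASSUMPTIONS for illustration.

def chips_for_envelope(total_gw: float, watts_per_chip: float,
                       overhead_fraction: float = 0.5) -> int:
    """Estimate accelerator count for a data-center power envelope.

    overhead_fraction: share of facility power consumed by everything
    that is not the accelerator itself (host CPUs, networking, cooling).
    """
    usable_watts = total_gw * 1e9 * (1 - overhead_fraction)
    return int(usable_watts / watts_per_chip)

# Assuming ~500 W per accelerator and 50% non-accelerator overhead:
print(f"{chips_for_envelope(5, 500):,} chips")  # 5 GW envelope
print(f"{chips_for_envelope(1, 500):,} chips")  # 1 GW envelope
```

Under these assumptions, 1 GW maps to roughly a million accelerators, which is consistent with the article's pairing of "nearly 1 GW" with "more than 1 million Trainium2 chips"; the real figure depends entirely on the per-chip and overhead numbers AWS has not published.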
The Investment Structure
The $25 billion upside headline needs a little unpacking, because it is not a single check.
Amazon is immediately investing $5 billion at Anthropic's current $380 billion post-money valuation, the level at which Anthropic closed its $30 billion Series G on February 12, 2026, the second-largest venture funding round ever[12][13]. The additional $20 billion is contingent on Anthropic hitting specific commercial milestones that have not been publicly disclosed[3][8].
That converts neatly into a layered picture of Amazon's Anthropic investment history:
| Round | Date | Amount | Cumulative |
|---|---|---|---|
| Initial | September 2023 | $1.25 billion | $1.25B |
| Expansion | March 2024 | $2.75 billion | $4B |
| Expansion | November 2024 | $4 billion | $8B |
| New — immediate | April 20, 2026 | $5 billion | $13B |
| New — milestone-based | TBD | Up to $20 billion | Up to $33B |
All numbers are from Amazon's and Anthropic's own disclosures, aggregated in CNBC, GeekWire, and TechCrunch reporting[3][7][9]. The two companies are careful to describe Amazon as a minority investor; even at the $33 billion cumulative upside, Amazon does not have majority control[3].
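The cumulative column in the table is a simple running sum, reproduced here with the milestone tranche counted at its $20 billion maximum (amounts in billions of USD):

```python
from itertools import accumulate

# Amazon's disclosed Anthropic investments, in billions of USD.
# The April 2026 milestone tranche is an upper bound, not a committed amount.
rounds = [
    ("Sep 2023 initial",      1.25),
    ("Mar 2024 expansion",    2.75),
    ("Nov 2024 expansion",    4.0),
    ("Apr 2026 immediate",    5.0),
    ("Milestone-based (max)", 20.0),
]

cumulative = list(accumulate(amount for _, amount in rounds))
for (label, amount), total in zip(rounds, cumulative):
    print(f"{label:<24} ${amount:>5.2f}B  cumulative ${total:>5.2f}B")
```

The running totals come out to $1.25B, $4B, $8B, $13B, and $33B, matching the table.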
Project Rainier and the Trainium Ramp
The compute commitment sits on top of an existing deployment that is already operational at unusual scale. Project Rainier, AWS's custom cluster for Anthropic, was announced in late 2024 and went fully online in late 2025 with nearly 500,000 Trainium2 chips in its initial phase[6][14]. AWS and Anthropic say Claude is now trained and served on more than 1 million Trainium2 chips, a figure both companies confirmed in their April 20 announcements[1][5].
The April 20 deal extends that pipeline across Trainium generations and caps it at 5 GW. Here is how the Trainium family compares on paper, generation by generation:
| Chip | Process | Per-chip HBM | Per-chip memory bandwidth | Per-chip FP8 compute | Status |
|---|---|---|---|---|---|
| Trainium2 | 5 nm | 96 GiB HBM | 2.9 TB/s | 1.3 PFLOPs dense / 5.2 PFLOPs sparse | Generally available since December 2024 (AWS re:Invent 2024)[15] |
| Trainium3 | TSMC N3P (3 nm) | 144 GB HBM3e | 4.9 TB/s | 2.52 PFLOPs dense | Generally available since AWS re:Invent 2025[16][17] |
| Trainium4 | Not disclosed | Not disclosed | 4x Trainium3 | "At least 3x" Trainium3 | On AWS roadmap[18] |
In AWS's marketing framing, Trainium3 UltraServer configurations (up to 144 Trainium3 chips wired together) deliver 362 FP8 petaflops total, with up to 4.4x higher performance, 4x lower training latency, and over 5x higher output tokens per megawatt versus the Trainium2 UltraServer[16][17]. AWS also cited up to 50% cost savings on training workloads versus comparable GPU configurations based on its internal testing[19]. Those are AWS's own benchmarks on its own silicon, so standard skepticism applies, but the memory-capacity and bandwidth jumps (1.5x and 1.7x respectively) are independently verifiable from the chip specifications[20].
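The "independently verifiable" jumps can be checked directly from the per-chip specs in the table (treating the GiB/GB difference in the HBM figures as negligible for ratio purposes):

```python
# Generation-over-generation ratios from the published per-chip specs.
trainium2 = {"hbm_gb": 96, "bandwidth_tbps": 2.9, "fp8_dense_pflops": 1.3}
trainium3 = {"hbm_gb": 144, "bandwidth_tbps": 4.9, "fp8_dense_pflops": 2.52}

memory_ratio = trainium3["hbm_gb"] / trainium2["hbm_gb"]
bandwidth_ratio = trainium3["bandwidth_tbps"] / trainium2["bandwidth_tbps"]
compute_ratio = trainium3["fp8_dense_pflops"] / trainium2["fp8_dense_pflops"]

print(f"HBM capacity:      {memory_ratio:.2f}x")     # 1.50x
print(f"Memory bandwidth:  {bandwidth_ratio:.2f}x")  # ~1.69x
print(f"FP8 dense compute: {compute_ratio:.2f}x")    # ~1.94x
```

The bandwidth ratio rounds to the ~1.7x cited above, and the dense FP8 jump works out to roughly 1.9x per chip.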
Nearly 1 GW of Trainium2 + Trainium3 coming online by the end of 2026 is the concrete near-term deliverable in the new agreement[1]. The remaining ~4 GW of the 5 GW envelope is planned to arrive across 2027–2029 and will likely skew toward Trainium3 initially, then Trainium4 as that generation enters production[1][18].
Claude Inside the AWS Console
One of the quieter parts of the April 20 announcement is a distribution change. AWS customers will now be able to access the "full Anthropic-native Claude console" from within AWS, using their existing AWS contracts with no additional credentials or billing relationship[1]. Previously, getting Claude into an AWS workload meant either using Amazon Bedrock's version or signing a separate Anthropic contract for features like Claude Code, the Projects workspace, or direct API access. The integrated console closes that gap.
The distribution lever matters because Anthropic said in its April 7 revenue disclosure that more than 100,000 customers are already running Claude on Amazon Bedrock[21]. Folding the full Anthropic console into the AWS experience is how Amazon keeps those customers, and how Anthropic gets native access to the AWS installed base as it scales its enterprise business.
Why Trainium, and Why Now
Anthropic's run-rate revenue crossed $30 billion in April 2026, up from roughly $9 billion at the end of 2025, more than a tripling in roughly four months[21]. The enterprise tier grew from 500 to over 1,000 customers spending $1 million or more per year in less than two months following the February Series G close[21][12]. Claude Code alone has become one of the largest drivers of Anthropic's revenue growth, which means the compute bill is driven not only by training but by sustained high-throughput inference[21].
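As a rough gauge of what that ramp implies, the compound monthly growth rate from ~$9 billion to $30 billion can be backed out directly. Treating the gap as four months is an approximation from the article's dates, not a disclosed figure:

```python
# Implied compound monthly growth from ~$9B (end of 2025) to $30B+ (April 2026).
# The four-month window is an approximation, not a disclosed figure.
start_arr, end_arr, months = 9.0, 30.0, 4
monthly_growth = (end_arr / start_arr) ** (1 / months) - 1
print(f"Implied compound monthly growth: {monthly_growth:.0%}")  # ~35%
```

Roughly 35% compound growth per month, sustained over a quarter, is the regime in which locking in multi-year compute supply matters more than any single benchmark.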
In that regime, a multi-year secured compute contract matters more than any one architectural benchmark. Two things specifically make Trainium attractive in this context:
- Cost per useful output. Anthropic and AWS have jointly presented results showing substantial cost-per-token advantages running Claude on Trainium2 versus contemporary GPU alternatives, a trend AWS says will widen with Trainium3's efficiency gains[5][19]. That lets Anthropic serve more Claude tokens per dollar of compute than it otherwise could.
- Capacity certainty. Custom silicon built on a hyperscaler's own fab allocations is insulated from the GPU allocation fights that have defined the last two years of AI infrastructure buildouts. Locking in 5 GW of Trainium across a decade is a capacity guarantee that a GPU-only roadmap cannot match.
The same logic is why Anthropic is not single-sourcing. The company explicitly runs a three-lane platform strategy: AWS Trainium, Google Cloud TPUs (1 GW coming online in 2026 under the October 2025 agreement, plus roughly 3.5 GW more starting in 2027 under a new April 2026 Google–Broadcom deal), and Nvidia GPUs. Anthropic says this lets it match workloads to the chips that run them best while keeping resilience across suppliers[1][10][22].
Where the Deal Fits in Anthropic's Compute Map
With the April 20 announcement, Anthropic now has three contracted compute lanes running in parallel:
| Partner | Commitment | Hardware | Timeline | Source |
|---|---|---|---|---|
| AWS | Up to 5 GW; $100B+ spend | Trainium2, Trainium3, Trainium4, Graviton | Ramping through 2036 | Anthropic–Amazon press release[1] |
| Google Cloud + Broadcom | ~3.5 GW additional TPU capacity | Google Cloud TPUs (designed with Broadcom) | Starts 2027, on top of 1 GW coming online in 2026 | Anthropic–Google–Broadcom disclosures[10][22] |
| Nvidia | Not quantified publicly | Nvidia GPUs across generations | Ongoing | Anthropic statements[1][22] |
Adding those together, Anthropic has well over 8 GW of contracted non-Nvidia compute secured or coming online across the remainder of the decade, not counting Nvidia GPU purchases. That is an enormous envelope for a company that was operating below $10 billion in run-rate revenue at the end of 2025[21]. It is also a strong signal that Anthropic believes Claude demand will continue to grow faster than any single supplier can meet alone.
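The arithmetic behind "well over 8 GW" is just the sum of the contracted lanes in the table, taking the AWS figure at its "up to" ceiling:

```python
# Contracted non-Nvidia compute lanes named in the article, in gigawatts.
lanes = {
    "AWS Trainium (up to)": 5.0,
    "Google TPU, 2026 tranche": 1.0,
    "Google TPU via Broadcom, from 2027": 3.5,
}
total_gw = sum(lanes.values())
print(f"Total contracted non-Nvidia capacity: {total_gw} GW")  # 9.5 GW
```

Since the AWS lane is an upper bound rather than a committed build, "well over 8 GW" is the conservative reading of a nominal 9.5 GW ceiling.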
What It Means for the Custom-Silicon Race
Two observations come out of the April 20 deal for anyone watching the broader chip landscape:
First, the gap between "announced" and "deployed" custom silicon is closing fast. As recently as 2024, most hyperscaler custom chip announcements were roadmaps with modest real-world deployments. Project Rainier crossed 1 million Trainium2 chips in production within months of full activation[5][6]. Meta's MTIA roadmap is moving toward 2nm silicon via Broadcom on a similar cadence[23]. Custom accelerators are no longer side projects; they are majority-share workloads for the customers buying them.
Second, Nvidia still participates, but on different terms. Anthropic continues to run Claude on Nvidia GPUs, and Amazon continues to sell Nvidia instances on AWS. What changes is the base-case option. Five or six years ago, the answer to "what do we run this on?" was almost always Nvidia. Today, for a model provider with AWS, Google, and Broadcom partnerships in its back pocket, the answer is "whichever accelerator has capacity and cost advantages for this workload." The April 20 deal is an explicit $100 billion vote that AWS Trainium will be the right answer for large portions of Claude's compute over the next decade[1].
The Bottom Line
The April 20 Anthropic–Amazon deal is a compute-supply story much more than an investment story. Amazon's additional $5–25 billion in Anthropic grabs headlines, but the operative commitment runs the other direction: $100 billion of AWS spending over ten years, anchored in up to 5 GW of Trainium capacity, with the first 1 GW of that envelope coming online this year on Trainium2 and the new Trainium3 chips AWS shipped at re:Invent 2025[1][16]. For Anthropic, it is a capacity lock-in to keep pace with enterprise Claude demand that has been doubling on a near-monthly basis[21]. For Amazon, it is proof that custom silicon can anchor a relationship with the AI lab it arguably most needs on its cloud[3]. And for the rest of the industry, it is the clearest signal yet that, alongside Meta–Broadcom, Google–Broadcom, and Microsoft's Maia program, the custom-accelerator build is no longer a research project. It is the default way frontier labs plan to run their models.
References
1. Anthropic, "Anthropic and Amazon expand collaboration for up to 5 gigawatts of new compute," press release, April 20, 2026. https://www.anthropic.com/news/anthropic-amazon-compute
2. Amazon, "Amazon and Anthropic expand strategic collaboration," press release, April 20, 2026. https://www.aboutamazon.com/news/company-news/amazon-invests-additional-5-billion-anthropic-ai
3. CNBC, "Amazon to invest up to another $25 billion in Anthropic as part of AI infrastructure deal," April 20, 2026. https://www.cnbc.com/2026/04/20/amazon-invest-up-to-25-billion-in-anthropic-part-of-ai-infrastructure.html
4. Dataconomy, "Amazon Deepens Anthropic Deal With $25 Billion New Investment," April 21, 2026. https://dataconomy.com/2026/04/21/amazon-invests-5-billion-in-anthropic-raising-total-to-33/
5. AWS, "AWS activates Project Rainier: One of the world's largest AI compute clusters." https://www.aboutamazon.com/news/aws/aws-project-rainier-ai-trainium-chips-compute-cluster
6. Data Center Dynamics, "AWS activates Project Rainier cluster of nearly 500,000 Trainium2 chips," 2025. https://www.datacenterdynamics.com/en/news/aws-activates-project-rainier-cluster-of-nearly-500000-trainium2-chips/
7. TechCrunch, "Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return," April 20, 2026. https://techcrunch.com/2026/04/20/anthropic-takes-5b-from-amazon-and-pledges-100b-in-cloud-spending-in-return/
8. Bloomberg, "Amazon Invests Additional $5 Billion in Anthropic to Deepen AI Partnership," April 20, 2026. https://www.bloomberg.com/news/articles/2026-04-20/amazon-to-invest-an-additional-5-billion-in-anthropic
9. GeekWire, "Amazon doubles total Anthropic investment to $8B, deepens AI partnership with Claude maker," November 22, 2024. https://www.geekwire.com/2024/amazon-boosts-total-anthropic-investment-to-8b-deepens-ai-partnership-with-claude-maker/
10. Anthropic, "Anthropic expands partnership with Google and Broadcom," April 7, 2026. https://www.anthropic.com/news/google-broadcom-partnership-compute
11. NBC4 Columbus, "Meet Prometheus: World's highest capacity data center slated to open in Ohio in 2026." https://www.nbc4i.com/news/local-news/new-albany/meet-prometheus-worlds-highest-capacity-data-center-slated-to-open-in-ohio-in-2026/
12. Anthropic, "Anthropic raises $30 billion in Series G funding at $380 billion post-money valuation," February 12, 2026. https://www.anthropic.com/news/anthropic-raises-30-billion-series-g-funding-380-billion-post-money-valuation
13. Crunchbase News, "Anthropic Raises $30B At $380B Valuation In Second-Largest Venture Funding Deal Of All Time," February 2026. https://news.crunchbase.com/ai/anthropic-raises-30b-second-largest-deal-all-time/
14. Data Center Knowledge, "AWS, Anthropic Complete Project Rainier AI Cluster." https://www.datacenterknowledge.com/supercomputers/project-rainer-aws-anthropic-complete-massive-ai-supercomputing-cluster
15. AWS, "Amazon EC2 Trn2 Instances and Trn2 UltraServers for AI/ML training and inference are now available." https://aws.amazon.com/blogs/aws/amazon-ec2-trn2-instances-and-trn2-ultraservers-for-aiml-training-and-inference-is-now-available/
16. AWS, "Announcing Amazon EC2 Trn3 UltraServers for faster, lower-cost generative AI training," December 2, 2025. https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-ec2-trn3-ultraservers/
17. Data Center Dynamics, "AWS makes Trainium3 UltraServers generally available." https://www.datacenterdynamics.com/en/news/aws-makes-trainium3-ultraservers-generally-available/
18. The Next Platform, "With Trainium4, AWS Will Crank Up Everything But The Clocks," December 3, 2025. https://www.nextplatform.com/2025/12/03/with-trainium4-aws-will-crank-up-everything-but-the-clocks/
19. Data Centre Magazine, "AWS Trainium3 Cuts AI Data Centre Power Consumption by 40%." https://datacentremagazine.com/news/trainium3-new-aws-chip-promises-4x-performance-boost
20. SemiAnalysis, "AWS Trainium3 Deep Dive — A Potential Challenger Approaching." https://newsletter.semianalysis.com/p/aws-trainium3-deep-dive-a-potential
21. NerdLevelTech, "Anthropic $30B ARR: How Claude Overtook OpenAI in Revenue," April 20, 2026. https://nerdleveltech.com/anthropic-30-billion-arr-surpasses-openai
22. Tom's Hardware, "Broadcom to supply Anthropic with 3.5 gigawatts of Google TPU capacity from 2027 — Claude pioneer says its annual revenue run rate has passed $30 billion." https://www.tomshardware.com/tech-industry/broadcom-expands-anthropic-deal-to-3-5gw-of-google-tpu-capacity-from-2027
23. NerdLevelTech, "Meta-Broadcom MTIA Deal: 1GW of 2nm Custom AI Silicon," April 21, 2026. https://nerdleveltech.com/meta-broadcom-mtia-deal-1gw-custom-ai-silicon