Cerebras IPO 2026: The $25B Nvidia Challenger
April 17, 2026
TL;DR
Cerebras Systems is targeting a Nasdaq IPO (ticker: CBRS) in Q2 2026 at a valuation of $22–25 billion, seeking to raise approximately $2 billion. The company makes the world's largest AI chip — the Wafer-Scale Engine 3 (WSE-3), physically approximately 57x larger than Nvidia's H100 — and in January 2026 secured a $10 billion compute contract with OpenAI, one of the largest AI infrastructure deals on record. What makes Cerebras unusual is that, unlike its closest rivals, it is pursuing an independent public listing rather than being absorbed by Nvidia or a larger acquirer.
What You'll Learn
- What Cerebras actually makes and why the wafer-scale design is radically different from GPUs
- The details of the $10 billion OpenAI contract and what it signals for Cerebras's position in AI infrastructure
- How the CFIUS national security review nearly derailed the IPO and what G42's involvement meant
- Cerebras's financial profile: revenue growth, customer concentration risk, and path to profitability
- How the competitive landscape shifted after Nvidia's $20B Groq deal, and where Cerebras stands
- What the IPO timeline looks like and what investors are weighing
What Is Cerebras Systems?
Cerebras Systems was founded in Los Altos, California in 2016 by Andrew Feldman and a team of semiconductor engineers including Gary Lauterbach, Michael James, Sean Lie, and Jean-Philippe Fricker. Feldman had previously co-founded SeaMicro, a low-power server startup that AMD acquired in 2012 for $334 million[1].
The founding premise was that the standard approach to AI computing — using clusters of GPUs connected by high-speed networking — introduces fundamental inefficiencies: data must move back and forth between chips at enormous scale, and the latency from inter-chip communication limits throughput. Cerebras's solution was to build a single chip large enough to hold an entire AI model's active compute and memory on one die. That chip is the Wafer-Scale Engine.
The Wafer-Scale Engine: One Chip to Run Them All
Conventional AI accelerators like Nvidia's H100 are cut from a silicon wafer into individual dies — each H100 has a die area of approximately 814 mm². The Cerebras WSE-3 takes the opposite approach: it is a single square die cut from an entire 300 mm TSMC 5nm wafer — the largest chip the wafer can yield.
The result is a die measuring 46,225 mm² — approximately 57x the area of an H100[2]. That scale unlocks capabilities that no multi-chip system can replicate at equivalent efficiency:
| Specification | Cerebras WSE-3 | Nvidia H100 |
|---|---|---|
| Die area | 46,225 mm² | ~814 mm² |
| AI-optimized cores | 900,000 | 16,896 CUDA cores (SXM5) |
| Transistors | 4 trillion | 80 billion |
| On-chip SRAM | 44 GB | 50 MB |
| Peak AI performance | 125 PetaFLOPS (sparse FP16) | ~3.9 PetaFLOPS (FP8) |
| Process node | TSMC 5nm | TSMC 4nm |
Sources: ServeTheHome, Tom's Hardware, Tweaktown[3]
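The headline multiples quoted throughout this article follow directly from the table above; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope ratios from the spec table above (published figures).
wse3_die_mm2 = 46_225   # WSE-3 die area
h100_die_mm2 = 814      # H100 die area (approximate)

wse3_sram_gb = 44       # WSE-3 on-chip SRAM
h100_sram_mb = 50       # H100 on-chip SRAM

area_ratio = wse3_die_mm2 / h100_die_mm2
sram_ratio = (wse3_sram_gb * 1024) / h100_sram_mb

print(f"Die area: {area_ratio:.1f}x")       # ~56.8x, rounded to "~57x" in coverage
print(f"On-chip SRAM: {sram_ratio:.0f}x")   # ~901x
```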
The critical advantage is memory bandwidth and latency. Because the WSE-3 holds 44 GB of SRAM on the same silicon as its 900,000 cores — rather than in off-chip HBM stacks, with inter-GPU traffic crossing PCIe or NVLink — it eliminates the memory bottleneck that limits GPU-based inference at scale. Tom's Hardware estimated that a single WSE-3 delivers the theoretical equivalent of approximately 62 H100 GPUs[4].
For AI inference specifically (running a trained model to generate outputs), this architecture delivers lower latency per request than GPU clusters of equivalent total FLOPS — which is exactly what OpenAI cares about when serving ChatGPT to millions of users.
The OpenAI Deal: $10 Billion for AI Inference
On January 14, 2026, Cerebras announced a multi-year compute contract with OpenAI worth over $10 billion, structured to deliver 750 megawatts of AI inference capacity through 2028[5].
OpenAI will use Cerebras infrastructure specifically for inference workloads — the compute-intensive work of running trained models to answer user queries, including ChatGPT and reasoning-focused models. Sachin Katti, OpenAI's Head of Industrial Compute, described the arrangement as adding "a dedicated low-latency inference solution to our platform. That means faster responses, more natural interactions, and a stronger foundation to scale real-time AI to many more people."[6]
The deal's scale is significant. OpenAI has historically relied overwhelmingly on Nvidia GPUs and Microsoft Azure infrastructure. A $10 billion, 750-megawatt commitment to a single alternative compute provider signals that Cerebras's architecture delivers a meaningful advantage for inference tasks that justifies the infrastructure complexity of adding a non-Nvidia supply chain.
The contract also provides Cerebras with a long-term revenue anchor that supports the IPO narrative: unlike many chip startups, which pitch future customers and future performance, Cerebras enters its public markets debut with a signed, multi-year contract from the world's highest-profile AI company.
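For scale, it can help to normalize the deal by its stated capacity. This is a rough illustrative calculation, not a disclosed contract term — it assumes the full $10 billion maps evenly onto the 750 MW commitment:

```python
# Illustrative only: implied contract value per megawatt of inference capacity.
# Assumes the full $10B maps evenly onto 750 MW (not a disclosed contract term).
contract_usd = 10e9
capacity_mw = 750

usd_per_mw = contract_usd / capacity_mw
print(f"~${usd_per_mw / 1e6:.1f}M per MW")  # ~$13.3M per MW
```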
The CFIUS Detour: A National Security Speed Bump
The road to Cerebras's IPO was significantly complicated by a foreign investment review that stretched across multiple years.
In 2023, Cerebras raised $335 million from G42, an Abu Dhabi-based AI company. G42 took voting shares in Cerebras. That structure triggered a review under the Committee on Foreign Investment in the United States (CFIUS), which scrutinizes foreign investments in U.S. companies with potential national security implications.
The concern: G42 had historical ties to Huawei and Chinese entities. An AI chip company with potential defense applications receiving substantial capital — and voting rights — from a company connected to a restricted Chinese technology vendor raised flags[7].
Cerebras originally filed a voluntary CFIUS notice in 2024 alongside G42. After months without clearance, the company withdrew the filing in September 2024, converting G42's voting shares to non-voting shares and arguing that CFIUS lacked jurisdiction over a non-voting share sale. Cerebras also filed its original S-1 in September 2024, targeting a Nasdaq listing, but ultimately withdrew it on October 3, 2025, ahead of a fresh filing.
CFIUS clearance finally came on March 31, 2025[8]. In the new S-1 documentation, G42 no longer appears among Cerebras's investors, suggesting the relationship was fully restructured as part of the resolution process.
The delayed IPO turned out to be a pivot point: in the months between the original S-1 withdrawal and the new filing, Cerebras landed the $10 billion OpenAI contract, effectively transforming its investor story.
Revenue and Financial Profile
Cerebras's revenue trajectory is steep, though the company has not yet reached profitability. Based on the original S-1 data — the most recent public financial information available before the new filing:
| Period | Revenue | YoY Growth |
|---|---|---|
| Full year 2023 | $78.7 million | — |
| TTM to June 30, 2024 | $206.5 million | ~162% |
The TTM figure represents approximately 162% growth over FY2023. Full-year 2024 revenue has not been disclosed in an official filing; financial aggregators project higher growth based on the H1 2024 run rate, but those estimates are not from official sources[9]. Gross margins have improved significantly, and net margins as of H1 2024 were approximately -48%, up from well below -100% in prior years.
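The ~162% growth figure falls straight out of the two disclosed revenue numbers above:

```python
# How the ~162% growth figure is derived from the S-1 numbers above.
fy2023_rev = 78.7e6        # full-year 2023 revenue
ttm_jun2024_rev = 206.5e6  # trailing-twelve-month revenue to June 30, 2024

growth = ttm_jun2024_rev / fy2023_rev - 1
print(f"Growth: {growth:.0%}")  # ~162%
```

Note this compares a trailing-twelve-month window against a calendar year, so it is an approximation of year-over-year growth rather than a strict FY-vs-FY comparison.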
Full-year 2025 financials have not yet been publicly disclosed in the new S-1. The $10 billion OpenAI contract was signed in January 2026, so its revenue contribution will appear in 2026 forward.
Customer concentration risk. This is the most significant caution flag in Cerebras's financial story. In 2023, G42 accounted for 83% of Cerebras's revenue; in H1 2024, the figure was 87%[10]. That level of concentration in a single customer — especially one now absent from the cap table — is unusual even by early-stage hardware startup standards. The OpenAI deal provides a new anchor customer, but investors will scrutinize whether Cerebras can build a diversified revenue base.
The Competitive Landscape Just Got Simpler (and Stranger)
Cerebras entered 2025 with a crowded set of AI chip competitors, including Groq, SambaNova, Graphcore, and several others. The competitive map has changed substantially:
Groq — Cerebras's closest architectural rival, known for its Language Processing Unit (LPU) and fast inference claims — was the subject of a $20 billion licensing and asset deal with Nvidia, announced December 24, 2025[11]. Groq continues as an independent company under new leadership, but the deal effectively removed the most comparable independent AI inference chip company from the public market conversation. Paradoxically, this may have accelerated Cerebras's IPO timing: as the leading pure-play AI inference chip company still pursuing independence, Cerebras now faces a cleaner runway.
SambaNova has raised $350 million and unveiled its SN50 chip on February 24, 2026, claiming 5x more compute per accelerator than its prior-generation SN40L[12]. SambaNova remains private and is reportedly in acquisition discussions with Intel, though those talks are said to have stalled.
Positron raised $230 million in a Series B announced February 4, 2026, with its Atlas system claiming 3x lower end-to-end latency versus H100 systems for trading inference workloads[13]. Positron is an early-stage company without disclosed revenue.
For a broader view of how hyperscalers are betting on custom silicon and AI infrastructure, see our coverage of the $700 billion AI infrastructure race and Huawei's Ascend 950PR AI chip challenge to Nvidia.
IPO Timeline and Investor Considerations
Cerebras raised a $1 billion Series H in February 2026 at a $23 billion post-money valuation, led by Tiger Global with participation from AMD, Fidelity, Benchmark Capital, Coatue, and Altimeter[14]. Total capital raised stands at approximately $2.8 billion across eight funding rounds since 2016.
The IPO is targeting Nasdaq under the ticker CBRS in Q2 2026, with Morgan Stanley as lead underwriter. The company aims to raise approximately $2 billion in the offering, implying a target valuation of $22–25 billion at the time of listing. An official IPO price range had not been set as of this writing.
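The stated terms also imply how much of the company the offering would represent. This is illustrative arithmetic only — actual dilution depends on the primary/secondary split and final pricing, neither of which has been disclosed:

```python
# Illustrative: share of the company implied by a ~$2B raise at the
# stated $22-25B valuation range (assumes an all-primary offering).
raise_usd = 2e9

for valuation in (22e9, 25e9):
    fraction = raise_usd / valuation
    print(f"${valuation / 1e9:.0f}B valuation -> {fraction:.1%} of the company")
```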
Pre-IPO secondary market data from platforms including Hiive and Forge Global shows significant investor appetite: shares reportedly gained approximately 120% on secondary markets in the 90 days leading up to the planned listing[15]. Secondary market prices are indicative only and may not reflect the final IPO price.
Key questions for IPO investors include: whether the 2025 financials in the new S-1 show continued revenue growth post-G42 concentration; how the $10 billion OpenAI deal is structured for revenue recognition over 2026-2028; how quickly Cerebras can diversify beyond one anchor customer; and whether the wafer-scale architecture can be manufactured at sufficient yield and scale to support expansion.
On the OpenAI relationship specifically: OpenAI itself has invested in and deployed infrastructure from multiple competing providers — Microsoft Azure, Cerebras, and reportedly others. OpenAI's growing infrastructure diversification strategy is detailed in our analysis of the $122 billion OpenAI funding round and superapp ambitions.
References
1. SeaMicro acquisition history — Mission.org interview with Andrew Feldman
2. WSE-3 specs: 900,000 cores, 4 trillion transistors — WCCFTech
3. OpenAI turns to Cerebras to scale AI inference infrastructure — Network World
4. Cerebras CFIUS review: G42 and the national security concern — BNN Bloomberg
5. Cerebras S-1 customer concentration — Tanay Jaipuria breakdown
6. Nvidia strikes $20 billion licensing and asset deal with Groq — Fortune
7. Cerebras secures $1B Series H at $23B valuation — IndexBox