Stanford AI Index 2026: US-China Gap Shrinks to 2.7 Points

April 16, 2026

Stanford's Institute for Human-Centered AI (HAI) released its 2026 Artificial Intelligence Index Report this week, and the headline is a race that has effectively closed. On Arena, the top US model leads the top Chinese model by roughly 2.7%, global corporate AI investment more than doubled to $581.7 billion, and the Foundation Model Transparency Index fell from 58 to 40 as the most capable labs disclosed the least.1

What You'll Learn

  • How Stanford measures the US-China AI gap and why it shrank to 2.7 points
  • The benchmark jumps that prompted the report's "historic" capability framing
  • Enterprise adoption at 88% and where the productivity gains actually land
  • Why the Foundation Model Transparency Index fell so sharply this year
  • Environmental and public-trust signals the report says defenders should watch

TL;DR

Stanford HAI's 2026 AI Index, released April 13, 2026, documents the fastest single-year capability jump on a flagship benchmark the report has tracked:

  • SWE-bench Verified top scores moved from ~60% at the end of 2024 to ~80–94% on today's leading models, with the Index using "near 100%" as its editorial framing.
  • Frontier performance is consolidating: on Arena, Claude Opus 4.6 (1,503) leads ByteDance's Dola-Seed-2.0-Preview (1,464) by roughly 2.7%, down from benchmark spreads of 17.5 to 31.6 percentage points across MMLU, MATH, and HumanEval at the end of 2023.
  • US private AI investment was $285.9 billion in 2025, versus $12.4 billion in China: a roughly 23× gap in private capital. Yet China leads on publication volume (23.2% of global AI papers) and patent grants (69.7% of global grants).
  • Organizational AI adoption is 88%, but the report says productivity gains are concentrated in a small leading cohort.
  • The Foundation Model Transparency Index dropped from 58 to 40, with Meta falling from 60 to 31 and Mistral from 55 to 18.
  • AI data-center power capacity reached 29.6 GW, and documented AI incidents rose to 362 from 233 in 2024.2

What the AI Index Actually Is

The AI Index is Stanford HAI's annual attempt to measure the state of AI across technical performance, economics, responsible AI, policy, education, science, and public opinion. The 2026 edition was released in April 2026 and is produced by the AI Index steering committee at Stanford's Institute for Human-Centered AI. It is one of the most widely cited public datasets on the sector and is treated as a reference point by policy-makers, enterprises, and the research community.3

Two things make the 2026 edition different from prior years. First, the capability numbers are no longer incremental — the report calls them a "historic" jump on flagship benchmarks. Second, the responsibility and transparency numbers moved in the opposite direction, and the report is unusually direct about the gap.

Capability: The "Historic" Benchmark Jump

The 2026 Index reports that frontier models now meet or exceed human baselines on PhD-level science questions, multimodal reasoning, and competition mathematics. The headline number is SWE-bench Verified — a real-world software engineering benchmark where top scores moved from roughly 60% at the end of 2024 to near 100% by the end of 2025 in the Index's framing.4

That language is the AI Index's editorial characterization, and public leaderboards are somewhat more conservative than "near 100%." As of March 2026, Claude Opus 4.5 led SWE-bench Verified at 80.9%, with Claude Opus 4.6 close behind at 80.8% and Gemini 3.1 Pro at 80.6%. By mid-April 2026, Claude Mythos Preview — an unreleased Anthropic model accessible only through Project Glasswing — had pushed the ceiling to 93.9%, followed by GPT-5.3 Codex at 85%.5 Either way, a benchmark whose top score sat at roughly 62% at the end of 2024 was being cleared at 80–94% one year later; that single-year jump is what the Index's "historic" framing points at.

The practical implication of that movement is what the Index leans on throughout the capability section: a capability that looked like an active research problem two years ago is becoming a deployed product feature, and the pace of change has not yet slowed.

The US-China Gap Shrinks to 2.7 Points

The most cited chart in the report is Stanford's tracker of US-versus-China model performance on Arena, the community-driven pairwise-comparison leaderboard. As of March 2026, Anthropic's Claude Opus 4.6 scores 1,503 on Arena — leading ByteDance's Dola-Seed-2.0-Preview (1,464) by roughly 2.7%. Broader Arena Elo ratings put Anthropic (1,503), xAI (1,495), Google (1,494), OpenAI (1,481), Alibaba (1,449), and DeepSeek (1,424) all in the top tier.6
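
Arena scores are Elo-style ratings, so the 39-point gap can be translated into an expected head-to-head win rate using the standard Elo formula. A quick sketch (our illustration, not a figure from the report):

```python
# Expected score of the higher-rated model under the standard
# Elo formula: E = 1 / (1 + 10^(-(Ra - Rb) / 400)).
def elo_win_prob(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-(r_a - r_b) / 400.0))

opus, dola = 1503, 1464  # Claude Opus 4.6 vs. Dola-Seed-2.0-Preview

print(f"Gap: {opus - dola} Elo points ({(opus - dola) / dola:.1%} relative)")
print(f"Expected head-to-head win rate: {elo_win_prob(opus, dola):.1%}")
```

A 39-point Elo gap works out to roughly a 55.6% expected win rate in pairwise comparisons, which is one concrete way to read how close "roughly 2.7%" actually is.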

That gap is striking because it used to be huge. At the end of 2023, the spread between the top US and Chinese models was:

Benchmark     End-of-2023 US-China gap (percentage points)
MMLU          17.5
MATH          24.3
HumanEval     31.6

The capital picture, by contrast, is still extremely lopsided. In 2025, US private AI investment was $285.9 billion, versus $12.4 billion in China — a roughly 23× multiple. Global corporate AI investment reached $581.7 billion, up 130% year over year; global private investment alone was $344.7 billion, up 127.5%. US AI companies captured roughly 83% of that global private investment figure. For scale, our earlier breakdown of the $700 billion AI infrastructure race argued that much of the current capex cycle is effectively a single-country compute build.7
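
The headline multiples follow directly from the report's figures; a back-of-envelope check (our arithmetic, using the numbers quoted above):

```python
# Sanity-check the Index's investment ratios (all figures in $B, 2025).
us_private = 285.9        # US private AI investment
cn_private = 12.4         # China private AI investment
global_private = 344.7    # global private AI investment
global_corporate = 581.7  # global corporate AI investment, up 130% YoY

print(f"US vs. China private multiple: {us_private / cn_private:.1f}x")
print(f"US share of global private:    {us_private / global_private:.0%}")
print(f"Implied 2024 corporate base:   ${global_corporate / 2.30:.0f}B")
```

The 130% year-over-year growth implies a 2024 corporate base of roughly $253 billion, consistent with the "more than doubled" framing.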

So the 2026 picture is: the US has a dominant private-capital position and still produces more top-tier frontier models (Epoch AI counts 50 notable US models in 2025 versus 30 from China), while China leads on publication volume (23.2% of global output), patent grants (69.7% of global grants), citation counts, and industrial robot installations.8

Adoption: 88% of Organizations, but the Gains Are Concentrated

Organizational AI adoption reached 88% in 2025, per the Index. Four out of five university students now use generative AI, and the report estimates US consumer surplus from generative AI tools at $172 billion annually by early 2026, with the median per-user value tripling between 2025 and 2026.9

The headline that will land hardest in boardrooms: generative AI reached 53% global population adoption within three years, a faster adoption curve than either the personal computer or the consumer internet managed at the same age.10

But the report is careful about what adoption means. It flags that returns are concentrated in a leading cohort of companies and that about one-third of surveyed organizations expect AI to reduce their workforce in the coming year, with anticipated reductions highest in service operations, supply chain, and software engineering.11 "Everyone uses AI" and "most organizations have not changed how work gets done" are both in the report, and the Index treats the distance between them as an execution question, not a capability one.

Transparency: The Foundation Model Transparency Index Drops from 58 to 40

The 2026 Index's starkest negative numbers are in responsible AI. The Foundation Model Transparency Index (FMTI) — a measure of how openly major AI companies disclose training data, compute, capabilities, risks, and usage policies — fell from 58 to 40 on average. In practice, that reverses the progress observed in 2024 and brings the average back close to the 37 recorded when the index launched in 2023.12

Two things stand out in the 2025 FMTI breakdown:

Group         Average score   Companies
Top scorers   78              IBM, Writer, AI21 Labs
Middle tier   36              Anthropic, Google, Amazon, OpenAI, DeepSeek, Meta, Alibaba
Bottom tier   15              Mistral, Midjourney, xAI
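
As a quick cross-check (ours, not the report's), the three tier averages weighted by their company counts land close to the reported overall average of 40:

```python
# Weighted average of the FMTI tiers: (tier average score, company count).
tiers = [(78, 3), (36, 7), (15, 3)]  # top, middle, bottom

n = sum(count for _, count in tiers)
overall = sum(score * count for score, count in tiers) / n
print(f"{n} companies, implied overall average: {overall:.1f}")
```

Thirteen companies and an implied average of about 40.8, within a point of the published figure.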

IBM scored 95/100 — the highest score in the Index's history.13 At the other end, Meta's score fell from 60 to 31 and Mistral's from 55 to 18. Google, Anthropic, and OpenAI have all abandoned the practice of publishing training-dataset sizes and training-duration details for their latest frontier models.14

The AI Index's own framing is direct: "Responsible AI is not keeping pace with AI capability." It pairs that with a separate count — documented AI incidents rose to 362 in 2025, up from 233 in 2024 — and reads the two numbers together as evidence that the gap between what frontier systems can do and what is publicly known about them is widening at exactly the wrong moment.15

Environment: 29.6 GW and Growing

The environmental chapter lands harder in the 2026 edition than in prior years. AI data-center power capacity reached 29.6 GW, roughly comparable to peak electricity demand for the entire state of New York. Grok 4's estimated training emissions came in at 72,816 tons of CO₂ equivalent, which the report compares to emissions from driving roughly 17,000 cars for a year.16

The Index does not treat those numbers as a case for slowing down. It treats them as a case for paying attention: the scaling curve is running into grid-scale constraints, and any serious enterprise AI strategy in 2026 has to plan around power, water, and siting, not just GPU cloud cost and allocation.

Public Opinion: Experts and Everyone Else See Different AIs

The public-opinion chapter is the one that most directly challenges the industry's own view of itself. 59% of people globally report feeling optimistic about AI's benefits — up from 55% — and 52% say they feel nervous about it, up from 50%.17

Inside those averages is a US expert-versus-public gap the report highlights explicitly. 73% of US experts view AI's impact on the job market positively, but only 23% of the US general public does. Only 33% of Americans expect AI to make their jobs better; the global average is 40%. US trust in its own government to regulate AI sits at 31%, the second-lowest figure in Stanford's surveyed country list — only China (27%) scored lower.18

The interpretation the Index offers: experts and the public are not disagreeing about what AI can do. They are disagreeing about who it is being built for.

Science: AI Is Inside Natural-Sciences Research

In 2025, more than 80,000 papers, preprints, and other publications in the natural sciences mentioned AI — a 26% year-over-year increase. The proportion of publications mentioning AI now ranges from 6% to 9% across individual natural-sciences fields. Physical sciences had the largest absolute volume at about 33,000 publications, and Earth sciences had the highest share at 9%.19

AI-related computer science publications have also more than doubled over the past decade, from 102,000 to 258,000.20 This is not the "AI is changing research" story of 2022. It is the "AI is already inside research" story of 2026.

Bottom Line

The 2026 AI Index describes a sector in which capability, capital, and adoption are all moving faster than transparency, public trust, and environmental footprint are adjusting. On Arena, the US leads China by roughly 2.7% — a real lead, but narrower than it has ever been, and measured on only one of several dimensions (capital, infrastructure, publications, patents, talent, and deployment) where the competition actually plays out. Frontier models are saturating long-running benchmarks. Organizational adoption is 88%, and concentrated returns suggest most of the productivity upside has not yet been captured. The Foundation Model Transparency Index at 40 is the number the Index's authors treat as the warning sign: capability is getting easier to measure at the same time that the systems themselves are getting harder to audit. That is the 2026 gap the report wants policymakers, enterprises, and the public to watch.

Footnotes

  1. Stanford HAI, "Inside the AI Index: 12 Takeaways from the 2026 Report" — hai.stanford.edu.

  2. Stanford HAI, "The 2026 AI Index Report" — hai.stanford.edu/ai-index/2026-ai-index-report; IEEE Spectrum coverage of the 2026 AI Index — spectrum.ieee.org.

  3. Stanford HAI, "AI Index" — hai.stanford.edu/ai-index.

  4. Stanford HAI, "Inside the AI Index: 12 Takeaways from the 2026 Report" — SWE-bench Verified capability chart.

  5. Epoch AI, SWE-bench Verified benchmark tracking — epoch.ai/benchmarks/swe-bench-verified; public SWE-bench Verified leaderboards as of March–April 2026.

  6. Stanford HAI, 2026 AI Index — Arena-based US-China model performance tracker.

  7. Stanford HAI, 2026 AI Index — corporate and private AI investment figures (2025); IEEE Spectrum coverage.

  8. Stanford HAI, 2026 AI Index — notable model counts (Epoch AI), publication share, patent grant share; Xinhua coverage — english.news.cn.

  9. Stanford HAI, "Inside the AI Index: 12 Takeaways from the 2026 Report" — organizational adoption and consumer surplus figures.

  10. Stanford HAI, 2026 AI Index — generative AI population adoption rate.

  11. Stanford HAI, 2026 AI Index — workforce expectations section.

  12. Stanford HAI, "Transparency in AI is on the Decline" — hai.stanford.edu; Stanford HAI, 2026 AI Index — Responsible AI chapter.

  13. Stanford HAI, 2026 Foundation Model Transparency Index — top, middle, and bottom tier scoring; IBM 95/100 record.

  14. Stanford HAI, 2026 Foundation Model Transparency Index — Meta and Mistral scoring changes.

  15. Stanford HAI, 2026 AI Index — documented AI incidents count (362 in 2025 vs. 233 in 2024).

  16. Stanford HAI, 2026 AI Index — environmental impact chapter (Grok 4 training emissions; data-center power capacity).

  17. Stanford HAI, 2026 AI Index — global public-opinion survey results.

  18. Stanford HAI, 2026 AI Index — US expert-vs-public sentiment; US government regulation trust figure.

  19. Stanford HAI, "Science — The 2026 AI Index Report" — hai.stanford.edu/ai-index/2026-ai-index-report/science.

  20. Stanford HAI, 2026 AI Index — AI-related CS publication growth (decade).

  21. Stanford HAI, "The 2026 AI Index Report" — hai.stanford.edu/ai-index/2026-ai-index-report.

  22. Stanford HAI, 2026 AI Index — Arena US-China performance tracker; MMLU/MATH/HumanEval end-of-2023 spreads.

  23. Stanford HAI, 2026 Foundation Model Transparency Index — index methodology and scoring.

  24. Stanford HAI, 2026 AI Index — organizational adoption and workforce expectations.

Frequently Asked Questions

Who produces the AI Index, and when was the 2026 edition released?

The 2026 AI Index Report was released in April 2026 by Stanford University's Institute for Human-Centered AI (HAI). It is the ninth edition of Stanford's annual report and is produced by the AI Index steering committee, with contributions from HAI's research staff and partner institutions.21
