Trump's FDA-Style AI Order: 2026 Mythos Policy Reversal
May 9, 2026
TL;DR
The Trump White House is drafting an executive order that would put new AI models through an FDA-style safety review before public release, National Economic Council Director Kevin Hassett told Fox Business on Wednesday, May 6, 2026.[1] The New York Times reported two days earlier that the administration is weighing an executive order to create an AI working group and a formal government review process for new AI models, and that White House officials have briefed executives at Anthropic, Google, and OpenAI on the plans.[2] The trigger: Anthropic's Claude Mythos Preview, announced April 7, 2026, which autonomously found thousands of zero-day vulnerabilities, including a 27-year-old OpenBSD bug and 271 flaws in Firefox.[3][4] If signed, the order would mark the sharpest reversal yet by an administration that revoked Biden's AI safety order on its first day in office in January 2025.
What You'll Learn
- What an FDA-style AI executive order would actually require, and what is still undecided
- How Anthropic's Claude Mythos Preview triggered the policy reversal
- What CAISI's new pre-deployment agreements with Google DeepMind, Microsoft, and xAI mean
- Why this represents a sharp reversal from EO 14179 and the July 2025 AI Action Plan
- What the "six-to-twelve-month window" warning from Anthropic's CEO means for software defenders
- What companies and developers should prepare for as the order takes shape
What the FDA-Style AI Executive Order Would Do
Speaking on Fox Business's "Mornings with Maria" on Wednesday, May 6, 2026, Hassett described the proposal in terms borrowed directly from drug regulation. The administration is "studying possibly an executive order to give a clear roadmap to everybody about how this is going to go and how future AIs that also potentially create vulnerabilities should go through a process so that they're released in the wild after they've been proven safe, just like an FDA drug."[1]
Three things are clear from the public reporting so far. First, the framing is explicit about pre-release safety review — not post-deployment audits or voluntary disclosure. Second, Hassett said it is "really quite likely" that any testing requirements would extend to all AI companies, not just the frontier labs that already have voluntary arrangements with the U.S. government.[5] Third, the New York Times reporting from May 4 — sourced to people briefed on conversations between Anthropic, Google, and OpenAI executives and members of the administration — indicates the proposal under consideration includes an AI working group and a formal government review process for new AI models, with the working group itself helping define which agencies are involved.[2]
What is not clear is whether the testing would be mandatory. As of this writing, multiple outlets reporting on the draft note that this question remains open inside the administration, with some officials preferring a light-touch approach and others pushing for aggressive, mandatory vetting.[6] That distinction matters enormously: a mandatory pre-release review would be the most consequential AI regulation in U.S. history, while a voluntary expansion would essentially codify what is already happening on a handshake basis.
Why Anthropic's Mythos Triggered the Policy Reversal
To understand why the administration moved, you have to understand what Mythos can do.
Anthropic announced Claude Mythos Preview on April 7, 2026, describing it as a general-purpose language model that turned out, during testing, to be unexpectedly strong at offensive cybersecurity.[3] In the weeks that followed, Anthropic disclosed that Mythos Preview had autonomously identified and exploited zero-day vulnerabilities across every major operating system and browser — including a now-patched 27-year-old TCP SACK flaw in OpenBSD, a 16-year-old H.264 codec bug in FFmpeg, and a 17-year-old NFS remote-code-execution flaw in FreeBSD now tracked as CVE-2026-4747.[7]
The Firefox numbers landed hardest. Mozilla disclosed that Firefox 150 shipped with fixes for 271 vulnerabilities identified by Mythos Preview during early evaluation — a dramatic jump from the 22 security-sensitive bugs Anthropic's prior Opus 4.6 model had found in Firefox 148 a few months earlier.[4]
Anthropic does not plan to make Mythos Preview generally available.[3] Instead, alongside the Mythos Preview disclosure on April 7, 2026, the company launched Project Glasswing, a coalition of 12 launch partners with direct access to the model: AWS, Apple, Anthropic, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.[8] More than 40 additional organizations that maintain critical software infrastructure have been granted extended access. Anthropic committed up to $100 million in usage credits across these efforts and $4 million in direct donations to open-source security work.[8]
The proximate political moment came on May 5, 2026, when Anthropic CEO Dario Amodei publicly described a "moment of danger" — a six-to-twelve-month window during which institutions need to patch the tens of thousands of vulnerabilities Mythos has surfaced before adversarial AI catches up.[9] Amodei estimated that other Western frontier labs are one to three months behind Mythos in offensive cyber capability, while Chinese models are six to twelve months behind.[10] By Wednesday, Hassett was on TV invoking the FDA.
CAISI Already Has Pre-Deployment Agreements With Three Labs
The executive-order discussion did not come from nowhere. On Tuesday, May 5, 2026 — the day before Hassett's Fox Business appearance — the Commerce Department announced that the Center for AI Standards and Innovation (CAISI), housed within the National Institute of Standards and Technology, had signed expanded collaborations with Google DeepMind, Microsoft, and xAI for pre-deployment evaluations of frontier AI models.[11]
The new agreements have several notable properties. The models will be tested in classified environments, and the evaluation focus is explicitly on "demonstrable risks" — cybersecurity, biosecurity, and chemical weapons risks rather than general capability or bias.[12] CAISI has so far conducted around 40 such evaluations, including on some models that have yet to be released.[6] The new deals build on the original 2024 voluntary agreements with OpenAI and Anthropic — the first of their kind — which the Trump administration inherited and chose to extend rather than dismantle.[11]
What the May 5 agreements do not do is make testing mandatory. CAISI's role is still consultative, and labs participate voluntarily. The proposed FDA-style executive order would be the mechanism to convert that voluntary regime into a binding one — if the administration chooses to take that step.
A Sharp Reversal: From EO 14179 to FDA-Style Vetting
The reversal is what makes this story politically significant. The Trump administration's AI policy has, until this month, been defined by deregulation:
| Date | Action | Direction |
|---|---|---|
| Jan 20, 2025 | Trump rescinds Biden's EO 14110 (October 30, 2023) within hours of taking office | Deregulatory |
| Jan 23, 2025 | Trump signs EO 14179, "Removing Barriers to American Leadership in Artificial Intelligence" | Deregulatory |
| Jul 23, 2025 | White House releases "Winning the Race: America's AI Action Plan" with 90 federal policy positions | Deregulatory |
| Dec 11, 2025 | Trump signs EO 14365, "Ensuring a National Policy Framework for AI," targeting state AI laws via federal preemption | Deregulatory (but federalizing) |
| May 6, 2026 | Hassett floats FDA-style pre-release vetting executive order | Reversal toward oversight |
The proposed order would not undo any of the prior deregulatory moves outright, but it would reintroduce a mechanism — pre-release safety review — that EO 14110 had partially established, and whose requirements were a central reason Trump rescinded that order. EO 14110 imposed Defense Production Act reporting requirements on labs training large dual-use foundation models;[15] the FDA-style order being studied would go further by gating release on a federal review.
That is why coverage from outlets across the spectrum has framed this week as the Trump administration "embracing AI oversight ideas it once rejected."[13] The framing is accurate: the administration that spent more than fifteen months arguing federal AI regulation was a barrier to American leadership is now arguing that federal AI regulation is the prerequisite for it. The variable that changed is Mythos.
The Six-to-Twelve-Month Window: Why Timing Matters
Amodei's "moment of danger" framing is doing real work in the policy debate, and it is worth taking seriously rather than dismissing as marketing.
The argument runs as follows. Mythos has surfaced a backlog of decades-old vulnerabilities that defenders did not know about. As long as Mythos and other Mythos-class models stay restricted to defenders — Project Glasswing partners, government agencies, vetted researchers — the patch cycle can run ahead of attackers. Once a comparably capable model leaks, is independently developed by a less safety-conscious lab, or is replicated in China, that asymmetry collapses.[9]
The U.K. AI Security Institute's evaluation, published in April 2026, gave the warning empirical weight. AISI found that Mythos Preview succeeded on 73% of expert-level capture-the-flag challenges and was the first model to autonomously complete its 32-step corporate network attack simulation end-to-end — a multi-step exercise AISI estimated would take a human operator around 20 hours.[14]
For a U.S. policymaker, this is the case for moving quickly. The FDA-style executive order is, in effect, the structural complement to Project Glasswing: Glasswing tries to use Mythos-class capability for defense before attackers have it; the executive order tries to make sure the next Mythos-class model — whoever builds it — does not show up in the wild without a federal safety review first.
What This Means for AI Companies and Developers
If the order is signed and is broadly modeled on Hassett's description, here is the realistic operational picture:
- Frontier labs already in the CAISI program (Anthropic, OpenAI, Google DeepMind, Microsoft, xAI) face the smallest near-term disruption. Their existing agreements would likely be the template for a more formal regime.
- Smaller frontier labs and open-weight model developers face the most uncertainty. Whether the order applies to open-weight releases — and to fine-tunes built on top of base models — is one of the unresolved questions inside the administration.
- Enterprise AI buyers should expect a procurement-relevant signal: "evaluated by CAISI" or "subject to federal pre-release review" will become a meaningful column in vendor due-diligence checklists, similar to FedRAMP authorization for cloud services.
- Security teams should not wait for the executive order to act on the Mythos disclosures. The Project Glasswing program is already pushing patches into critical infrastructure, but the long tail of self-hosted software remains exposed. The vulnerability classes Mythos found — semantic logic flaws that fuzzers missed — are the ones every security team should be re-prioritizing now.
The biggest open question is the one Hassett deliberately did not answer: whether the testing regime will be mandatory or voluntary. A voluntary regime essentially codifies the May 5 CAISI announcement and extends it to additional labs. A mandatory regime is genuinely new in U.S. AI policy and would require Congress's eventual blessing to survive litigation.
Bottom Line
May 6, 2026 was the day a deregulatory administration started talking about AI like it talks about pharmaceuticals. The trigger was a single Anthropic model that found decades-old bugs in software running on most of the world's computers. Whether the executive order that ultimately emerges is mandatory or voluntary, narrowly scoped to security risks or broadly applied to all frontier capability, will determine whether this is a one-off response to one model or the start of a durable U.S. AI safety regime.
For now, the practical effect is already visible: Google DeepMind, Microsoft, and xAI just joined OpenAI and Anthropic in the CAISI program; Project Glasswing partners are racing to patch the Mythos-found vulnerabilities; and the rest of the industry is reading the same New York Times story and trying to figure out what compliance is going to look like.
If you want context on how Mythos got here, the original Project Glasswing announcement and the AISI independent evaluation are the primary sources behind this week's headlines. The jagged-frontier analysis from AISLE is the strongest counterargument that smaller, unrestricted models can already do more of what Mythos can do than the White House framing implies — which is itself a reason the executive-order debate matters.
Footnotes

1. "Hassett: White House may review AI models 'like an FDA drug.'" The Hill, May 6, 2026. https://thehill.com/policy/technology/5866292-white-house-ai-evaluation-process/
2. New York Times reporting, May 4, 2026 (https://www.nytimes.com/2026/05/04/technology/trump-ai-models.html), as summarized in Pearl, Mike. "Trump Reportedly Considering Executive Order Aimed at Vetting New AI Models." Gizmodo, May 5, 2026. https://gizmodo.com/trump-reportedly-considering-executive-order-aimed-at-vetting-new-ai-models-2000754493
3. Anthropic Frontier Red Team. "Assessing Claude Mythos Preview's cybersecurity capabilities." red.anthropic.com, April 7, 2026. https://red.anthropic.com/2026/mythos-preview/
4. Holley, Bobby. "The zero-days are numbered." Mozilla Blog, April 21, 2026. https://blog.mozilla.org/en/privacy-security/ai-security-zero-day-vulnerabilities/
5. "White House Prepares Order to Boost AI Security, Hassett Says." Bloomberg, May 6, 2026. https://www.bloomberg.com/news/articles/2026-05-06/white-house-preps-order-to-boost-ai-security-hassett-says
6. "WH 'studying' AI security executive order." Federal News Network, May 6, 2026. https://federalnewsnetwork.com/artificial-intelligence/2026/05/wh-studying-ai-security-executive-order/
7. Anthropic. "Project Glasswing: Securing critical software for the AI era." https://www.anthropic.com/glasswing
8. Anthropic. Project Glasswing program announcement, April 7, 2026. https://www.anthropic.com/glasswing
9. "Anthropic CEO warns of cyber 'moment of danger' as AI exposes thousands of vulnerabilities." CNBC, May 5, 2026. https://www.cnbc.com/2026/05/05/anthropic-ceo-cyber-moment-of-danger-mythos-vulnerabilities.html
10. "Anthropic CEO Predicts Firms Have 6 Months to Patch Software Vulnerabilities." PYMNTS, May 2026. https://www.pymnts.com/artificial-intelligence-2/2026/anthropic-ceo-predicts-firms-have-6-months-to-patch-software-vulnerabilities/
11. "NIST's CAISI Announces New Frontier AI Testing Agreements with Google DeepMind, Microsoft, xAI." HPCwire, May 5, 2026. https://www.hpcwire.com/off-the-wire/nists-caisi-announces-new-frontier-ai-testing-agreements-with-google-deepmind-microsoft-xai/
12. "Commerce AI center will evaluate Google DeepMind, Microsoft and xAI models." Nextgov/FCW, May 5, 2026. https://www.nextgov.com/artificial-intelligence/2026/05/commerce-ai-center-will-evaluate-google-deepmind-microsoft-and-xai-models/413349/
13. "Trump administration suddenly embraces AI oversight ideas it once rejected." Fortune, May 6, 2026. https://fortune.com/2026/05/06/trump-administration-embraces-ai-oversight-policies-it-once-rejected-anthropic-mythos-caisi/
14. "Our evaluation of Claude Mythos Preview's cyber capabilities." AI Security Institute (AISI), April 13, 2026. https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities
15. "Executive Order 14110." Wikipedia. https://en.wikipedia.org/wiki/Executive_Order_14110