TL;DR: Pentagon Under Secretary Emil Michael escalated the Anthropic confrontation into a full-scale policy offensive on March 3, publicly warning that Biden-era vendor restrictions baked into AI procurement contracts "could paralyze military missions" by limiting which models the DoD can deploy in real-time combat planning. The statement reframes AI safety guardrails — written into contract terms to restrict autonomous targeting and domestic surveillance — as a national security threat, not a safeguard. It is the most direct public articulation yet of a military doctrine that treats AI ethics constraints as operational liabilities.
What you will learn
- What Emil Michael actually said — and what it means
- Biden's AI procurement rules: what was in them
- Why the Pentagon wants vendor restrictions removed
- The Anthropic ban as policy catalyst
- OpenAI's amended deal: what changed
- The doctrine shift: safety guardrails as operational liabilities
- What real-time combat planning actually requires
- Congressional and legal reaction
- What this means for every AI company with defense ambitions
- The global dimension: what adversaries are doing
- Frequently asked questions
What Emil Michael said
Emil Michael, Under Secretary of Defense for Research and Engineering, delivered the statement in a March 3 briefing with defense reporters, framing the week's events not as a dispute between the Pentagon and a single AI company but as a systemic policy failure inherited from the previous administration.
The precise language, confirmed by multiple reporters in the room: Biden-era vendor restrictions written into AI procurement contracts "could paralyze military missions" by constraining which AI capabilities the DoD can invoke in real-time operational contexts.
Michael's argument has three components, laid out across the briefing.
First, vendor restrictions embedded in contract terms — language that limits how a model can be used, what outputs it can generate, and which applications are off-limits — create legal and operational friction at exactly the moment when military planners need AI systems to perform without qualification. A contract that requires human review of every AI-generated targeting recommendation, for example, slows decision cycles in time-sensitive operations.
Second, those restrictions were negotiated under a Biden administration framework that Michael characterized as "AI safety theater" — language designed to satisfy civil liberties advocates without accounting for what the department actually needs in combat environments.
Third, and most provocatively, Michael argued that allowing AI vendors to embed usage restrictions in government contracts amounts to letting private companies set military doctrine. "No technology vendor should have veto power over the decisions of the United States military," he said.
That last line is the most significant. It takes the Anthropic dispute — which was framed publicly as a conflict between Trump's political preferences and Anthropic's corporate ethics — and reframes it as a structural question about who controls the terms of military AI deployment.
Biden's AI procurement rules
To understand what Michael is attacking, it helps to understand what Biden's AI procurement framework actually contained.
The Biden administration issued Executive Order 14110 in October 2023, establishing the first comprehensive federal AI governance framework. For defense procurement specifically, the National Security Memorandum on AI (NSM-AI), released in October 2024, set out requirements for AI used in military and intelligence contexts. The memorandum was notable for two things it required and one thing it explicitly declined to prohibit.
It required that AI systems used in lethal operations maintain "meaningful human control" over targeting decisions — a standard broadly consistent with international humanitarian law principles.
It required that vendors disclose safety testing results and red-team findings to the government before deployment in high-stakes contexts.
It explicitly declined to prohibit AI from being used in the targeting chain entirely — leaving open a broad range of decision-support applications that stopped short of fully autonomous lethal authority.
The vendor restriction language Michael is targeting is not in the NSM-AI itself — it lives in individual contract terms negotiated between the DoD's acquisition offices and AI vendors between late 2023 and early 2026. Those contracts reflect, to varying degrees, the safety commitments that companies like Anthropic built into their commercial usage policies.
When Anthropic negotiated access to DoD systems, it included contractual provisions that tracked its own acceptable use policy: no autonomous lethal targeting, no mass domestic surveillance, mandatory human review for certain output categories. The Pentagon agreed to those terms at the time of contracting. Michael's March 3 statement is, in effect, a declaration that the Pentagon should not have agreed to those terms and will not accept them in future contracts.
Why the Pentagon wants restrictions removed
The operational case Michael is making is not invented — it reflects real tensions in military AI deployment that defense planners have been discussing internally for at least three years.
Real-time combat planning operates on compressed timelines. A time-sensitive target — a mobile missile launcher, a command vehicle, a fleeting communications node — may be observable for minutes before it relocates or conceals itself. Intelligence fusion, target validation, collateral damage estimation, and strike authorization that used to take hours in deliberate targeting processes can theoretically be compressed to minutes with AI assistance.
But "theoretically" is doing heavy lifting in that sentence. The compression only works if the AI system can produce outputs that human decision-makers trust and act on at speed. If every AI-generated recommendation requires extended human review — the kind of review that satisfies a contractual "meaningful human control" requirement — the speed advantage disappears.
This is the genuine operational dilemma at the center of Michael's argument. He is not simply asserting that safety guardrails are politically inconvenient. He is asserting that they structurally conflict with the speed requirements of modern warfare.
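To make that claimed conflict concrete, here is a back-of-the-envelope sketch. Every number in it is assumed for illustration only, not drawn from doctrine, reporting, or any real targeting process; the point is simply that a mandatory review step is a fixed time cost measured against a target's exposure window.

```python
# Illustrative time-budget sketch. All durations are assumed, not sourced
# from DoD doctrine or any reported system.
from dataclasses import dataclass

@dataclass
class DecisionCycle:
    fusion_min: float          # intelligence fusion
    validation_min: float      # target validation
    cde_min: float             # collateral damage estimation
    review_min: float          # human review of the AI recommendation
    authorization_min: float   # strike authorization

    def total(self) -> float:
        return (self.fusion_min + self.validation_min + self.cde_min
                + self.review_min + self.authorization_min)

def fits_window(cycle: DecisionCycle, exposure_window_min: float) -> bool:
    """True if the full decision cycle completes before the target relocates."""
    return cycle.total() <= exposure_window_min

# Hypothetical numbers: the same AI-assisted cycle with and without a
# ten-minute mandated review step.
with_review = DecisionCycle(fusion_min=1, validation_min=2, cde_min=2,
                            review_min=10, authorization_min=1)
without_review = DecisionCycle(fusion_min=1, validation_min=2, cde_min=2,
                               review_min=1, authorization_min=1)

window = 8.0  # minutes the mobile target is assumed to remain observable
print(fits_window(with_review, window))     # False: the review consumes the window
print(fits_window(without_review, window))  # True
```

Under these assumed numbers the reviewed cycle misses an eight-minute window and the unreviewed cycle does not, which is the entire substance of the speed argument. The live dispute is over what those numbers actually are in practice.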
The counterargument — which Michael's statement does not address — is that accelerating targeting decisions through AI without meaningful human review creates its own operational risks: wrong targets, underestimated civilian casualties, second-order escalation that human judgment might have caught. The costs of AI-accelerated errors in a military context are not the same as the costs of AI errors in a consumer application.
The Anthropic ban as policy catalyst
Michael's March 3 statement did not emerge in a vacuum. It is the third move in a sequence that began with the Pentagon's February 25 meeting with Dario Amodei and escalated through the executive order on February 27.
The Anthropic confrontation gave the Pentagon a concrete case to argue from. It is easier to make the abstract claim that "vendor restrictions could paralyze missions" when you can point to a specific company that refused a specific Pentagon request on the grounds that the use violated its acceptable use policy.
What the Pentagon requested from Anthropic: removal of contractual restrictions on autonomous targeting support and domestic surveillance capabilities. What Anthropic declined: exactly those two things, publicly and explicitly, with Amodei's February 26 statement naming them as non-negotiable red lines.
The executive order that followed — banning Anthropic from all federal agencies, labeling it a "Radical Left AI company," and invoking the language of supply chain risk — created the political permission structure for Michael's March 3 expansion. The Anthropic ban is no longer just a contract dispute with one company. It is now explicitly the predicate for a broader policy push to strip Biden-era vendor restrictions from every AI contract in the DoD portfolio.
Military officials, speaking on background to defense correspondents, estimated that between 12 and 17 active contracts with AI vendors contain language similar to the Anthropic provisions that triggered the dispute — mandatory human review requirements, prohibited use categories, or vendor audit rights over deployment contexts. Those contracts are now under review.
OpenAI's amended deal
The same week that Anthropic was banned, OpenAI moved to ensure its Pentagon relationship would not face similar friction.
OpenAI's existing DoD contract — confirmed in reporting from late February — was amended on March 1 to remove language that had restricted certain autonomous decision-support applications. The amendment was characterized by OpenAI in an internal communication, later reported by The Information, as a "scope clarification" that brought the contract terms into alignment with the department's operational requirements.
What changed: the amended contract removes a prior requirement for mandatory human review of AI-generated recommendations in time-sensitive targeting contexts. It substitutes a looser standard that requires "appropriate human oversight" — language that military lawyers can interpret broadly depending on operational context.
Sam Altman confirmed the amendment in a brief statement, saying OpenAI was "committed to supporting the legitimate defense needs of the United States" and that the company had worked with DoD lawyers to ensure the contract reflected "operational realities."
The amendment is significant for two reasons. First, it demonstrates that at least one major AI vendor was willing to give the Pentagon what it asked Anthropic for and was refused. Second, it establishes a new contract template — one without categorical vendor restrictions on autonomous applications — that the DoD is now likely to require in new contracts and renewals.
The QuitGPT boycott, which reached 1.5 million participants in response to the original Pentagon contract, added an estimated 300,000 new participants in the 48 hours following disclosure of the amendment. Whether that boycott translates into meaningful revenue impact for OpenAI is unclear, but the scale of public reaction to the policy direction is measurable.
The doctrine shift
Michael's statement represents something more consequential than a dispute over a contract clause. It is a public articulation of a military AI doctrine — one that treats safety guardrails as operationally dangerous, not operationally necessary.
The previous doctrine, embedded in the Biden NSM-AI and in the DoD's own AI Ethical Principles published in 2020, treated safety constraints as features of responsible AI deployment. The principles stated explicitly that DoD AI should be "responsible, equitable, traceable, reliable, and governable" — and that "governable" included the ability for human operators to override or disengage AI systems.
Michael's argument implies that "governable" is now the problem. A system that humans can override is a system that slows down when humans exercise that override. In time-sensitive targeting contexts, that slowdown has operational costs.
This doctrine shift has a name in defense policy circles: it is a move from human-in-the-loop (human must approve every decision) to human-on-the-loop (human monitors and can intervene but is not required to approve each action) — or, in the most aggressive reading of Michael's position, toward human-out-of-the-loop for certain decision categories.
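The distinction between those modes is easier to see as control flow. The sketch below is purely illustrative; the class and function names are hypothetical and do not correspond to any vendor or DoD interface. What it shows is where the human gate sits, and therefore where the latency cost is paid, in each mode.

```python
# Minimal sketch of the three oversight modes. All names are hypothetical
# illustrations, not a real vendor or DoD API.
from enum import Enum, auto

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a human must approve every action
    HUMAN_ON_THE_LOOP = auto()      # action proceeds unless a human vetoes in time
    HUMAN_OUT_OF_THE_LOOP = auto()  # no human gate for this decision category

class Operator:
    """Stand-in for the human reviewer; in practice this is the slow, contested step."""
    def approve(self, recommendation: str) -> bool:
        return True    # placeholder for what may be minutes of analysis

    def vetoed_within(self, recommendation: str, timeout_s: int) -> bool:
        return False   # placeholder: no veto arrived inside the window

def act(recommendation: str) -> str:
    return f"executed: {recommendation}"

def execute(recommendation: str, mode: OversightMode, operator: Operator) -> str:
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        # Blocking gate: every decision pays the full latency of human review.
        return act(recommendation) if operator.approve(recommendation) else "rejected"
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        # Non-blocking gate: the human monitors and can intervene, but approval is not required.
        return "vetoed" if operator.vetoed_within(recommendation, timeout_s=30) else act(recommendation)
    # Human-out-of-the-loop: the system acts on its own recommendation.
    return act(recommendation)

print(execute("example recommendation", OversightMode.HUMAN_IN_THE_LOOP, Operator()))
```

In the first mode the review is a blocking step whose latency is paid on every decision; in the second it costs time only when someone intervenes; in the third it does not exist. That ordering is exactly why the third option is both the fastest and the most contested.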
International humanitarian law, including the Geneva Conventions and Additional Protocol I, contains no provisions that explicitly govern autonomous weapons, though Additional Protocol I does require states to legally review new weapons and methods of warfare. The International Committee of the Red Cross has called for a legally binding prohibition on autonomous weapons systems that select and engage targets without meaningful human control. The U.S. has not signed onto that call.
Michael's statement moves U.S. military doctrine further from the ICRC position, without naming it.
What real-time combat planning actually requires
It is worth examining the operational claim Michael is making with some precision, because it is frequently stated but rarely unpacked.
Real-time combat planning — the phrase Michael used — refers to a specific category of military decision-making that operates on compressed timelines. The canonical scenario is time-sensitive targeting: an opportunity to strike a high-value target that has a limited exposure window. But real-time combat planning also includes logistics optimization, route planning, electronic warfare coordination, and communications deconfliction — tasks where AI assistance speeds up coordination without raising autonomous weapons concerns.
The conflation of these categories matters. When Michael says vendor restrictions "could paralyze military missions," he does not specify which missions and which restrictions. A restriction on autonomous lethal targeting has zero effect on AI-assisted logistics planning. A restriction on mass surveillance has zero effect on AI-assisted battlefield simulation. The rhetorical move is to treat all restrictions as categorically paralyzing, when the operational impact depends entirely on the use case.
Defense analysts outside the department have noted this conflation. Paul Scharre of the Center for a New American Security, author of a widely cited book on autonomous weapons, published a thread on March 3 arguing that Michael's framing "conflates genuine speed advantages with a preference for removing accountability structures — those are different problems with different solutions."
The distinction matters for evaluating the policy argument. If the actual operational bottleneck is logistics AI constrained by paperwork requirements, the solution is streamlined procurement, not the removal of targeting restrictions. If the actual bottleneck is targeting AI constrained by human review requirements, the solution Michael is proposing — removing those requirements — carries a different set of risks.
Congressional and legal reaction
The March 3 statement triggered immediate congressional attention, with most reactions splitting along predictable lines and at least one significant break from within the administration's own party.
Sen. Ron Wyden (D-OR), ranking member of the Senate Finance Committee and longtime surveillance critic, called Michael's statement "a direct invitation to deploy autonomous weapons without meaningful accountability" and announced plans to introduce legislation requiring congressional approval before DoD contracts can waive human review requirements for AI in targeting chains.
Rep. Mike Gallagher (R-WI), chair of the House Select Committee on the CCP and a defense hawk, issued a statement that broke with the administration's framing on narrow but significant grounds: "The United States military's strength has always come from the discipline of our decision-making, not the speed of it. Removing human judgment from lethal targeting is not a capability upgrade. It is a doctrine change that requires congressional deliberation."
Gallagher's statement is notable because it comes from a conservative defense hawk who has been broadly supportive of the administration's posture on China and AI competition. His break on this specific point reflects a real constituency within national security circles that believes autonomous weapons create escalation risks that outweigh their tactical speed advantages.
On the legal front, three organizations — the ACLU, the Electronic Privacy Information Center (EPIC), and the International Human Rights Clinic at Harvard Law — announced a joint effort to obtain DoD contracting documents related to the Biden-era AI procurement terms under FOIA, and to file amicus briefs if any legal challenges to the executive order reach the courts.
What this means for AI companies with defense ambitions
Every AI company currently pursuing or holding DoD contracts now faces a clarified version of the decision that Anthropic faced in February: accept the Pentagon's preferred terms, or risk the commercial and regulatory consequences of declining.
The Pentagon's preferred terms, as articulated by Michael's March 3 statement, are increasingly clear. They want:
- No categorical vendor restrictions on autonomous decision-support applications
- Human oversight standards flexible enough to accommodate operational timelines
- No vendor audit rights over deployment contexts — the government decides how it uses the model
- No contractual provisions that give vendors a veto over specific use categories
For companies that can accept those terms — OpenAI has now demonstrated it can — the defense market represents an enormous revenue opportunity. The DoD's AI spending is projected to reach $1.8 billion in fiscal year 2027, up from approximately $800 million in 2024. Contracts at that scale, with multi-year renewal paths, are transformative for any AI company's revenue mix.
For companies that cannot accept those terms — Anthropic's red lines, maintained at the cost of its federal contracts, are the clearest example — the calculus is different. The lost government revenue is real. Anthropic's federal contracts, by various estimates, represented between $150 million and $200 million in committed annual revenue. That is a significant number for a company that has not yet reached profitability.
The strategic question is whether maintaining those red lines creates durable differentiation in the enterprise market outside government — where CISOs and procurement officers increasingly care about what values and accountability structures come embedded in the AI they deploy — or whether the government market is large enough that companies without those red lines simply outcompete over time.
Dario Amodei has not responded publicly to Michael's March 3 statement as of the time of publication.
The global dimension
Michael's March 3 statement did not address the international context, but it operates within it and changes it.
China's military AI development program is the unstated reference point for the Pentagon's urgency. The People's Liberation Army has been explicit, in public doctrine documents, about its intent to achieve AI-enabled autonomous decision-making in targeting by 2030. The PLA's Intelligent Warfare doctrine, published in translation by defense think tanks in 2024, describes human-out-of-the-loop autonomous engagement as a strategic objective, not a risk to be managed.
The U.S. military's traditional argument for maintaining human-in-the-loop requirements has been two-part: it is required by international humanitarian law as a matter of U.S. policy, and it produces better outcomes because human judgment catches AI errors in high-stakes contexts. Michael's statement begins to chip away at the second part of that argument — not by disputing that human judgment catches errors, but by arguing that the speed cost of that error-catching is too high.
The concern among arms control analysts is that a U.S. move toward human-on-the-loop or human-out-of-the-loop standards, even an implicit one, accelerates a global race to remove accountability structures from military AI — because other states will cite U.S. practice as justification for their own doctrine.
China's first national standards for humanoid robots, published in January 2026, include provisions for autonomous weapons integration that exceed anything the U.S. has publicly contemplated. The competitive pressure is real. The question that Michael's statement does not answer is whether matching that competitive posture requires abandoning accountability structures — or whether accountability structures, properly designed, can coexist with the operational speed requirements of modern warfare.
That question remains open. It is, at this moment, what the debate is actually about.
Frequently asked questions
What exactly did Emil Michael say on March 3?
Michael said in a Pentagon briefing that Biden-era vendor restrictions written into AI procurement contracts "could paralyze military missions" — specifically by constraining AI decision-support capabilities available in real-time combat planning. He characterized the restrictions as a structural problem inherited from a previous administration's AI governance framework and signaled that the DoD was reviewing contracts to identify and remove similar provisions. He did not name specific contracts or vendors beyond the Anthropic context established earlier in the week.
What are the Biden-era AI restrictions the Pentagon is targeting?
The primary framework is the National Security Memorandum on AI (NSM-AI), issued October 2024, which requires "meaningful human control" over AI used in lethal targeting decisions and mandates vendor safety testing disclosures. The specific contract-level restrictions Michael is targeting include mandatory human review requirements for AI targeting recommendations, prohibited use categories negotiated into vendor contracts, and vendor audit rights that allow companies to monitor how their models are deployed. These provisions vary contract by contract — they are not uniform across all DoD AI contracts.
Does removing vendor restrictions violate international law?
Removing contractual vendor restrictions does not by itself violate international humanitarian law — IHL governs state conduct in armed conflict, not contract terms. What matters for IHL compliance is whether, in practice, human decision-makers exercise meaningful control over targeting decisions that result in lethal action. The concern raised by arms control legal scholars is that removing contractual human review requirements creates operational conditions where meaningful human control becomes nominal rather than real — technically present but practically overridden by time pressure. That is a doctrine question, not a contract question.
Why did OpenAI amend its deal when Anthropic would not?
OpenAI and Anthropic have materially different positions on military AI use. OpenAI updated its usage policies in early 2024 to permit military and national security applications, removing prior categorical restrictions. Anthropic has maintained two specific red lines — no autonomous lethal targeting without human control in the loop, no mass domestic surveillance — that the Pentagon explicitly requested be removed. OpenAI's amended contract brings its terms into alignment with Pentagon operational requirements as described by Michael. Anthropic's refusal to do the same is what triggered the executive order and the broader policy confrontation.
How much federal revenue did Anthropic lose?
Estimates from defense procurement analysts place Anthropic's committed federal contract revenue between $150 million and $200 million annually. The six-month phaseout window means some of that revenue will continue through late 2026, but the longer-term contracts — multi-year arrangements that would have renewed — are expected to transfer to competing vendors including OpenAI, Google, and xAI. The commercial enterprise market outside government remains open to Anthropic.
What happens to the 12-17 other DoD contracts with similar vendor restrictions?
Military officials have confirmed those contracts are under review, with the goal of identifying provisions that track the Anthropic model — categorical prohibited use categories, mandatory human review standards, or vendor audit rights. That review is expected to produce modification requests to the affected vendors: remove the contested provisions or face non-renewal. Some contracts will expire before modification is required; others may face legal disputes if vendors decline to amend on the Pentagon's preferred terms.
Could Congress stop the Pentagon from removing these restrictions?
Congress has oversight authority over defense procurement and can condition appropriations on DoD compliance with specified standards — including human review requirements for AI in targeting. Sen. Wyden has announced legislative intent to do exactly this. Whether such legislation can pass a Republican-controlled Congress, given the administration's framing of the issue as a national security urgency, is uncertain. Rep. Gallagher's cross-partisan statement indicates that the opposition is not purely partisan — which is the necessary condition for any legislative constraint to have traction.