Pentagon to Anthropic: restrict military AI use and expect punishment
The Pentagon warned Anthropic over its refusal to let Claude be used for mass surveillance and autonomous weapons. Full breakdown.
TL;DR: Defense Secretary Pete Hegseth gave Anthropic a Friday deadline to remove guardrails on Claude or face a supply chain risk designation that would effectively blacklist the company from the defense ecosystem. Anthropic has two firm red lines: no mass surveillance of Americans, no autonomous weapons that fire without human oversight. As of today, February 26, 2026, it is not budging.
The dispute did not come out of nowhere. Anthropic has had a contract with the Pentagon's Chief Digital and AI Office worth up to $200 million, one of four contracts awarded alongside Google, xAI, and OpenAI to customize generative AI models for military use. The contracts were supposed to include usage-policy restrictions that both sides agreed to upfront.
The Pentagon changed its position.
Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei on Tuesday, February 24. The meeting changed nothing. A Pentagon official then delivered an ultimatum: comply by 5:01 pm Friday, or face consequences. The demand was simple: give the military access to Claude for "all lawful use cases" without restriction.
Pentagon Chief Technology Officer Emil Michael put the administration's position in direct terms. He said it is "not democratic" for Anthropic to set limits on how its AI can be used. "Congress writes bills, the president signs them, agencies write regulations, and people comply," Michael said, according to Breaking Defense. "What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed."
That framing redefines the conflict. From Anthropic's perspective, these are usage restrictions the Pentagon agreed to in a signed contract. From the Pentagon's perspective under the current administration, a private company does not get to set boundaries on what the military can do with technology it is paying for.
"Anthropic has no plans to budge and adhere to the Pentagon's demands," CNN reports. More companies should tell government officials to pound sand in a large variety of situations." -- @JDTuccille
Anthropic's usage policies have explicit prohibitions. The company will not allow Claude to "produce, modify, design, or illegally acquire weapons" or to "track a person's physical location, emotional state, or communication without their consent."
In the context of the Pentagon dispute, those policies translate to two concrete limits:
Autonomous weapons. Anthropic will not allow Claude to make final targeting decisions in military operations without a human in the loop. This aligns with the definition of fully autonomous weapons systems under the Convention on Certain Conventional Weapons, which describes them as systems that can "identify, select, and engage a target, without intervention by a human operator in the execution of these tasks." Anthropic has sought contractual assurances that its models will not be used for offensive or lethal decision-making without clearly defined human oversight.
Mass domestic surveillance. Anthropic will not allow Claude to be used for bulk surveillance of American citizens. This is not a prohibition on all surveillance activities. Anthropic reportedly offered access for missile defense applications. But domestic mass surveillance, the kind that would involve monitoring American citizens at scale without consent or individualized suspicion, is off the table.
Dario Amodei reiterated both red lines after the Tuesday meeting with Hegseth. According to sources familiar with the discussions, Anthropic is not softening its position.
What is notable here is that Anthropic did not refuse the military outright. The company reportedly offered access to Claude for missile defense and other applications that do not cross its stated lines. The dispute is specifically about autonomous lethal weapons and mass surveillance, not about military use of AI in general.
"In a normal news cycle in a normal year, Anthropic versus the Pentagon would be the story of the year -- the military threatening a top A.I. lab over a defining question of our century's technology." -- @AlexPanetta
The two penalties on the table are different in kind.
The supply chain risk designation is the blunt instrument. Normally applied to foreign adversaries, this designation would require any company with a defense contract to certify that it does not use Anthropic's products in military work. The effect would be broad. Many large corporations hold defense contracts, which means a supply chain risk designation would pressure them to drop Anthropic from enterprise AI deployments entirely, not just defense ones. This could materially damage Anthropic's enterprise business at a time when it is competing hard against OpenAI and Google.
The Defense Production Act is a different threat entirely. The Act gives the president authority to direct domestic industries for national security purposes. Invoking it against Anthropic would mean compelling the company to allow its tools to be used by the military regardless of what Anthropic's own policies say. It would effectively override Anthropic's usage policy by executive fiat.
Legal scholars have noted that applying the Defense Production Act to an AI company in this way would be unprecedented and legally contested. A detailed analysis by Lawfare argues the Act was designed for physical production of goods and materials, not for compelling software companies to remove safety restrictions on AI models.
The question of whether it would hold up in court matters less in the short term than whether the threat itself creates sufficient pressure to move Anthropic.
The Anthropic situation does not exist in isolation. The tech industry has been wrestling with military AI for nearly a decade, and the pattern of conflict followed by accommodation, or principled exit, has played out before.
Google and Project Maven (2018). Project Maven was a Pentagon program to use machine learning to analyze drone surveillance footage and flag objects of interest for human analysts. Google's contract to supply technology and engineering support became public in early 2018. Over 3,100 employees signed an open letter to CEO Sundar Pichai stating that "Google should not be in the business of war." A dozen employees resigned over the contract. Google ultimately chose not to renew the Project Maven contract and published AI ethics principles that included commitments not to build weapons or enable surveillance "that violates internationally accepted norms." Google has since resumed defense work, but Project Maven became a reference point for what employee and public pressure can force a company to do.
Palantir: building for the battlefield. Palantir has taken the opposite approach, making defense AI a core business. In August 2025, the company landed a $10 billion Army software and data enterprise agreement. Palantir's Tactical Intelligence Targeting Access Node (TITAN) system uses AI to collect data from space sensors to assist with warfare strategy and strike targeting. The company delivered the first two TITAN systems to the U.S. Army in March 2025. In the current dispute, Palantir is caught in the middle: it provides the secure cloud infrastructure that allows the military to use Anthropic's Claude model, meaning the contract fight between Anthropic and the Pentagon directly affects Palantir's own relationship with both parties.
Microsoft and HoloLens IVAS. Microsoft won a $22 billion Army contract in 2021 to supply an augmented reality headset for soldiers, the Integrated Visual Augmentation System, based on HoloLens 2. The contract drew some internal criticism but not the mass protests Google faced. In February 2025, Microsoft transferred production and development of the IVAS hardware to Anduril, retaining a role providing cloud and AI services. The shift reflected engineering and delivery challenges with the physical hardware, not a policy change.
The through line: companies that want defense revenue have generally found ways to provide it. Companies that have drawn ethical lines, like Google in 2018, have reaped short-term reputational gains but faced long-term pressure to re-engage. Anthropic is currently choosing the principled resistance path, and it is doing so while under a direct contractual and legal threat.
The substance of what Anthropic is refusing to enable matters, and it is worth being direct about what is actually being debated.
Autonomous weapons are systems that can kill without a human making the final decision to fire. International arms control experts, humanitarian organizations, and a significant portion of the AI research community have argued for years that these systems pose unique legal and ethical problems. Under international humanitarian law, responsibility for civilian casualties requires a human decision-maker who can exercise judgment. A fully autonomous system that selects and engages targets removes that accountability.
The International Committee of the Red Cross has called for legally binding rules on autonomous weapons. The Convention on Certain Conventional Weapons has been discussing, but not resolving, this issue for over a decade. The United States has not ratified any treaty banning autonomous weapons, and the current administration's posture suggests it is not inclined to do so.
Mass surveillance of American citizens raises separate but equally serious concerns. Bulk surveillance programs without individualized suspicion have been challenged under the Fourth Amendment. The NSA's bulk metadata collection program, revealed by Edward Snowden in 2013, was eventually found by a federal appeals court to be illegal. Using AI to conduct that surveillance at greater scale and lower cost would amplify both the capability and the constitutional concerns.
What the Pentagon is asking for, in effect, is for Anthropic to remove contractual barriers that would allow Claude to be used for applications that remain contested in international law and potentially illegal under U.S. constitutional protections.
The Pentagon's counter-argument is that it does not actually intend to use Claude for mass surveillance or fully autonomous weapons, and that Anthropic setting those limits is an overreach by a private company into military policy. That argument might carry more weight if the Pentagon had not responded to a refusal to remove those limits with an ultimatum.
The Anthropic-Pentagon standoff sets a precedent that every AI company with a government contract is watching carefully.
If Anthropic holds its position and suffers the supply chain designation, the message to the industry is clear: maintaining ethical limits on AI use will cost you commercially, and the current administration is prepared to use regulatory tools to enforce compliance. That may deter other companies from including usage restrictions in government contracts, even on applications the companies themselves believe are harmful.
If Anthropic caves, the message is different but equally significant: usage policy restrictions in AI contracts are negotiable under pressure, which means they were never real limits at all. Any future commitments by AI companies about what their models will not do become harder to take seriously.
If Anthropic holds and the legal system constrains the Defense Production Act application, it would establish that AI companies have some degree of contractual and legal protection for usage policies they negotiate in good faith. That outcome is the most uncertain and the most consequential.
The broader issue is that AI companies are increasingly being asked to make decisions about weapons, surveillance, and military force that were previously made entirely by governments and defense contractors. They have no democratic mandate to make those decisions, but they also hold the technology that makes certain capabilities possible. The Anthropic situation is an early test of whether that position comes with any real power to refuse.
| Company | Military contracts | Autonomous weapons | Mass surveillance | Usage restrictions |
|---|---|---|---|---|
| Anthropic | Yes ($200M CDAO deal) | ✗ Prohibited | ✗ Prohibited | Yes (contested) |
| Google | Yes (resumed post-Maven) | Unclear | Unclear | Limited public commitments |
| OpenAI | Yes ($200M CDAO deal) | Not publicly restricted | Not publicly restricted | Partial |
| Microsoft | Yes (IVAS, Azure DoD) | Not publicly restricted | Not publicly restricted | Partial |
| Palantir | Yes ($10B Army deal) | Core product capability | Core product capability | ✗ None |
| xAI | Yes ($200M CDAO deal) | Not publicly restricted | Not publicly restricted | Unclear |
The table above reflects publicly available positions as of February 26, 2026. "Unclear" and "Not publicly restricted" are not equivalent to endorsement. The honest read is that most AI companies have not drawn explicit lines.
What did the Pentagon demand, and what is the deadline?
The Pentagon demanded that Anthropic remove usage-policy restrictions on its Claude AI model and allow the military to use it for "all lawful use cases" without limitation. Defense Secretary Pete Hegseth gave Anthropic a Friday deadline to comply or face penalties including a supply chain risk designation and potential invocation of the Defense Production Act.
What exactly is Anthropic refusing to allow?
Anthropic has two explicit limits: it will not allow Claude to make final targeting decisions in military operations without a human in the loop (autonomous weapons), and it will not allow Claude to be used for mass domestic surveillance of American citizens. Anthropic reportedly offered to allow Claude for missile defense and other applications that fall outside these two prohibitions.
How does the Defense Production Act figure into this?
The Defense Production Act gives the president authority to direct domestic industries for national security purposes. The Pentagon threatened to invoke it to compel Anthropic to provide unrestricted military access to Claude. Legal experts at Lawfare have argued that using the Act against an AI company to override its usage policies would be unprecedented and legally questionable, since the Act was designed for physical goods production, not software or AI model restrictions.
What would a supply chain risk designation mean for Anthropic?
A supply chain risk designation is typically applied to foreign adversaries and requires Pentagon contractors to certify they do not use the designated company's products in military work. If applied to Anthropic, it would pressure any company with a defense contract to drop Claude from enterprise deployments, significantly damaging Anthropic's commercial business beyond just defense work.
How does this compare to Google's Project Maven episode?
Google's Project Maven conflict in 2018 involved employee protests against a contract to apply machine learning to drone surveillance footage. Over 3,100 Google employees signed an open letter, a dozen resigned, and Google ultimately did not renew the contract. That dispute was driven by internal employee pressure. The Anthropic situation is different: it involves direct government threats against a company that is itself drawing policy lines, not just responding to employee pressure.
Where does Palantir fit into the dispute?
Palantir provides the secure cloud infrastructure that allows the U.S. military to run Anthropic's Claude model for defense applications. The Anthropic-Pentagon contract dispute directly affects Palantir, which is caught between two clients in conflict. Palantir itself has no usage restrictions on its military AI products and has a $10 billion Army enterprise agreement.
Is Anthropic refusing to work with the military entirely?
No. According to reporting from NBC News, Anthropic offered the Pentagon access to Claude for missile defense applications. The refusal is specifically targeted at autonomous weapons (systems that select and engage targets without human authorization) and mass domestic surveillance. Anthropic's position is not anti-military, but it has hard limits on two specific applications.
Why does the Pentagon call Anthropic's limits undemocratic?
Pentagon CTO Emil Michael argued that Congress and the executive branch set policy, and that a private company drawing its own limits on military use overrides that democratic process. Anthropic's counter-position is that the Pentagon originally agreed to the usage restrictions in the signed contract, and that companies have a right to set terms on how their products are used regardless of who the buyer is.
What happens if Anthropic misses the deadline?
The Pentagon threatened two actions: first, labeling Anthropic a supply chain risk, which would effectively prohibit Pentagon contractors from using Claude; second, invoking the Defense Production Act to compel cooperation. Anthropic reportedly has no plans to change its position. The enforcement mechanisms and any legal challenges would determine what comes next.
Why does this standoff matter beyond Anthropic?
The standoff sets a precedent for how the U.S. government may treat AI companies that include ethical usage restrictions in government contracts. If the Pentagon succeeds in forcing Anthropic to remove its limits, it signals to the industry that usage policies are effectively unenforceable against government pressure. If Anthropic holds and legal protections apply, it establishes that AI companies retain some meaningful ability to set conditions on how their technology is used, even by the military.