TL;DR: Anthropic has filed suit against the Department of Defense, becoming the first AI company to take the U.S. government to court over AI safety guardrails. The lawsuit challenges the Pentagon's supply chain risk designation on a narrow statutory argument: 10 U.S.C. § 3252 limits that designation to companies involved in direct DOD procurement chains, and Anthropic is a commercial software vendor, not a defense hardware supplier. The outcome could define the legal boundaries of government power over AI companies for a generation.
What you will learn
- How we got here: the full timeline from deadline to courtroom
- What the supply chain risk designation actually does
- The central legal argument: 10 U.S.C. § 3252 and its limits
- Constitutional claims: First Amendment and due process
- The Huawei precedent and why this case is different
- Where other AI companies stand
- What the Pentagon is arguing back
- What the ruling could mean for AI regulation
- Frequently asked questions
How we got here
This lawsuit did not come from nowhere. It is the direct result of an escalating confrontation that started with a contract dispute and ended with a government-wide ban.
The original deal. In July 2024, Anthropic signed a contract with the Pentagon's Chief Digital and Artificial Intelligence Office worth up to $200 million. Anthropic was one of four companies — alongside Google, OpenAI, and xAI — to prototype frontier AI for defense applications. Crucially, the contract included usage-policy restrictions that both parties agreed to upfront. Anthropic would not let Claude be used for fully autonomous weapons or mass surveillance of American citizens. The Pentagon signed on those terms.
The shift in January 2026. Defense Secretary Pete Hegseth issued a memo calling for an "AI-first warfighting force" and demanding AI models be available for all military purposes "free from usage policy constraints." That was a direct contradiction of what had been negotiated.
The February ultimatum. On February 24, Hegseth met Anthropic CEO Dario Amodei in person. The meeting produced nothing. The Pentagon issued a formal deadline: comply by 5:01 PM ET on Friday, February 28, or face consequences. The demand was unrestricted military access to Claude — the same deal Elon Musk's xAI had already accepted.
Amodei's refusal. On February 26, Amodei published his response. "We cannot in good conscience accede to their request." He drew exactly two lines: no autonomous lethal weapons, no mass surveillance of Americans. He also noted that the Pentagon's overnight counter-offer contained "legalese that would allow those safeguards to be disregarded at will."
Trump's executive order. On February 27 — hours before the Friday deadline — President Trump signed an executive order directing every federal agency to "immediately cease all use of Anthropic's technology." The Pentagon received a six-month phaseout.
The supply chain risk designation. Within days of the executive order, the Pentagon formalized what it had threatened earlier: it labeled Anthropic a supply chain risk under 10 U.S.C. § 3252. That designation is what triggered the lawsuit.
The sequence matters for understanding Anthropic's legal strategy. The company is not challenging the executive order directly — that is a much harder fight on separation-of-powers terrain. It is challenging the supply chain risk designation, which rests on a specific statute with a specific scope. That is a narrower, more winnable argument.
What the supply chain risk designation does
To understand why Anthropic sued, you need to understand what the supply chain risk designation actually means in practice.
The designation is not symbolic. Under the National Defense Authorization Act, any company awarded a government contract that involves information and communications technology can be flagged as a supply chain risk if the Pentagon determines it poses a threat to national security. The usual targets are companies with ties to foreign adversaries — Huawei, ZTE, and Kaspersky are the canonical examples. The designation functions as a blacklist.
Once designated, the effects cascade:
- Anthropic is excluded from future DOD procurement entirely.
- The executive order's government-wide ban gains a formal statutory footing.
- Enterprise customers with federal contracting exposure face compliance pressure to drop Anthropic as a vendor.
- Defense contractors that use Anthropic's products anywhere in their operations inherit that compliance exposure, even for work unrelated to defense.
That last item is where the real damage lives. A defense contractor that uses Anthropic's API for internal legal research, not military applications, may drop Claude entirely rather than audit which workflows touch DOD work. The designation has an amplifying effect well beyond the direct ban.
For Anthropic, a company with an estimated $380 billion valuation and a commercial enterprise business growing alongside its government work, the supply chain designation could cost significantly more than the $200 million Pentagon contract it has already lost. Enterprise customers exposed to federal contracting would face pressure to switch vendors to avoid compliance exposure.
The central legal argument
Anthropic's lawsuit rests on a precise statutory reading of 10 U.S.C. § 3252.
The law authorizes the Secretary of Defense to exclude a source from procurement if the Secretary determines there is a "supply chain risk." But the statute's scope is the key question. Section 3252 was written to govern the procurement of telecommunications and information technology hardware and software for DOD systems — meaning technology that goes directly into Defense Department infrastructure.
Anthropic's core argument: the supply chain risk authority in § 3252 is limited to companies operating in the DOD's direct procurement chain. Anthropic is not a defense hardware manufacturer. It is not building chips, routers, servers, or communications infrastructure. It is a commercial AI company that sold a software license. Applying § 3252 to exclude a commercial software company from future civilian government contracts — on the basis that it refused to remove safety features — stretches the statute beyond its plain meaning.
The argument has three layers:
1. Textual. The statute's language targets "supply chain" risk, which the Pentagon has historically applied to hardware components that could be compromised at the manufacturing level (e.g., chips with hidden backdoors). Anthropic's AI models are not physical components in any supply chain. The company's refusal to remove safety restrictions is a policy disagreement, not a supply chain vulnerability.
2. Scope. Even if the statute reaches commercial software, its application has historically been limited to companies that present a national security risk because of their foreign ownership, control, or influence. Anthropic is a U.S.-headquartered, U.S.-incorporated company with no foreign government affiliation. The Huawei designation rested on documented ties to the Chinese military and intelligence apparatus. Anthropic's only offense is a disagreement with the Pentagon about usage policy terms in a commercial contract.
3. Arbitrary and capricious. Under the Administrative Procedure Act, agency actions are reviewable for being arbitrary and capricious. Using a national security supply chain tool to punish a domestic AI company for exercising its contractual rights — especially when the Pentagon agreed to those restrictions in the original contract — arguably falls into that category.
"The government cannot invoke a national security supply chain statute to punish a company for declining to remove safety features the government itself agreed to in a signed contract. That is not what the statute was designed for, and it is not how administrative law works." — Charlie Bullock, Senior Research Fellow, Institute for Law & AI
Constitutional claims
Beyond the statutory argument, Anthropic is also pressing two constitutional claims that, while harder to win, carry significant long-term implications.
First Amendment. Anthropic's usage policies are, in part, expressions of values — statements about what kinds of content and applications the company will and will not support. The company's brief argues that penalizing Anthropic for maintaining those policies is viewpoint-based retaliation that implicates First Amendment protections. The government is not simply declining to buy a product. It is using regulatory power to coerce a company into abandoning publicly stated positions.
This argument faces a high bar. The government generally has wide latitude in procurement decisions, and courts have been reluctant to find First Amendment violations in government contracting contexts. But in the current environment, where the executive order explicitly described the action as punishment for Amodei's refusal, the retaliation argument has more factual support than usual.
Due process. The Fifth Amendment requires that the government provide notice and an opportunity to be heard before taking action that deprives a company of property or liberty interests. Anthropic's brief argues that the supply chain risk designation was applied without the procedural safeguards the statute itself requires — specifically, that the designation was made in response to a public policy dispute, not a genuine national security investigation, and that the compressed timeline from ultimatum to designation gave Anthropic no meaningful opportunity to respond.
The due process claim is procedural rather than substantive, which makes it potentially easier to win on narrow grounds. Even if a court does not strike down the supply chain designation entirely, it could vacate the designation and require the Pentagon to restart the process with proper notice and procedure.
The Huawei precedent and why this case is different
The Pentagon's supply chain risk authority was built, in large part, around the Huawei problem. Understanding that history clarifies why Anthropic's designation is legally unusual.
Huawei, the Chinese telecommunications equipment manufacturer, was added to the Commerce Department's Entity List in May 2019. The designation followed years of documented national security concerns: congressional testimony about Huawei's ties to the People's Liberation Army, findings by intelligence agencies of exploitable backdoors in Huawei hardware, and specific incidents involving surveillance. The FCC and NDAA restrictions that followed prohibited use of Huawei equipment in networks serving U.S. government agencies.
The Huawei framework rests on three pillars that are absent in the Anthropic case:
1. Foreign control. Documented ties to the Chinese military and intelligence apparatus.
2. Technical vulnerability. Exploitable backdoors in physical hardware installed in sensitive networks.
3. Covert threat. Specific incidents involving surveillance and documented espionage risk.
The key distinction is that Huawei posed a supply chain risk in the literal sense: physical devices that could be compromised at the hardware level, installed in sensitive networks, and used for surveillance or sabotage. Anthropic poses no analogous technical risk. It is being designated not because its technology has hidden vulnerabilities or foreign backdoors, but because its CEO publicly refused to remove safety features.
That distinction is not just rhetorical. It is the heart of the § 3252 argument. The statute was designed for the Huawei problem. Repurposing it for the Anthropic situation requires reading the word "supply chain" to encompass something it was never intended to cover.
Where other AI companies stand
Anthropic is alone in the courtroom. But it is not alone in its position.
OpenAI has been the most explicit. CEO Sam Altman publicly stated he shares Anthropic's red lines on autonomous weapons and mass surveillance. OpenAI has not been threatened with a supply chain designation, likely because it has not been as vocal about refusal. Whether Altman's public solidarity translates to legal support for the lawsuit remains to be seen.
Google has stayed quiet at the leadership level, though over 300 of its employees signed an open letter supporting Amodei during the February standoff. Google's institutional memory of Project Maven — the 2018 contract cancellation that followed mass employee protest — makes it cautious about public statements on military AI.
xAI took the opposite path. Elon Musk's company agreed on February 24 to allow Grok on classified networks for "any lawful use." No restrictions. Full Pentagon compliance. It is now the model the administration is holding up as the alternative to Anthropic.
Microsoft has significant DOD exposure through Azure Government and its IVAS contract history. It has said nothing publicly about the Anthropic case. Its silence is notable given its commercial relationship with OpenAI.
Palantir is caught in the most uncomfortable position. Its secure cloud platform is the infrastructure through which Anthropic's Claude has operated on classified networks. The six-month phaseout window in Trump's executive order exists precisely because ripping Anthropic out of Palantir's classified environment is a non-trivial engineering and security problem. Palantir has not commented on the lawsuit.
The legal brief war has also attracted amicus interest from civil liberties organizations. The ACLU has indicated it may file a brief supporting Anthropic's First Amendment and due process arguments. The Electronic Frontier Foundation is expected to weigh in on the statutory scope question.
What the Pentagon is arguing back
The government's expected defense rests on three positions.
Broad statutory authority. The DOD will argue that § 3252 gives the Secretary of Defense wide discretion to determine what constitutes a supply chain risk, and that judicial review of national security procurement decisions is narrow. The government frequently invokes the state secrets doctrine and executive deference in procurement disputes to limit court scrutiny.
Democratic legitimacy. Pentagon CTO Emil Michael's framing — "Congress writes bills, the president signs them, agencies write regulations, and people comply" — previews the constitutional argument. The government will argue that Anthropic is improperly inserting private corporate governance into decisions that belong to the democratic process. A company cannot set terms on how the military uses technology, the argument goes, because the military's rules are set by Congress and the executive branch, not commercial vendors.
No cognizable injury. The DOD may argue that losing a government contract does not constitute a legal injury sufficient to sustain certain constitutional claims, particularly on the First Amendment count. The government is not required to contract with any particular vendor, and declining to do so — even for viewpoint-related reasons — may not meet the threshold for a First Amendment retaliation claim in the procurement context.
The strongest ground for the Pentagon is judicial deference. Courts have historically been reluctant to second-guess national security determinations, and a conservative federal judiciary is unlikely to be sympathetic to an AI company suing over military contract terms. The government does not need to win on every argument. It needs the court to find the case non-justiciable, or to apply a deferential standard that makes the statutory argument very hard to sustain.
What the ruling could mean
This case is likely to take years to fully resolve, but its implications are already being felt across the industry.
If Anthropic wins on the § 3252 argument, it establishes a clear limit on the supply chain risk designation: it cannot be used to punish domestic companies for policy disagreements. That protection would extend to every AI company that includes ethical usage restrictions in government contracts. The government would need to rely on other mechanisms — executive orders, contract renegotiation, legislative action — to compel AI companies to remove safety features.
If Anthropic loses, the supply chain risk tool becomes available as a general-purpose enforcement mechanism against AI companies. The message to the industry would be unambiguous: usage restrictions in government AI contracts are not enforceable if the government decides to push back. Every AI company with federal exposure would face pressure to remove ethical guardrails preemptively.
If the court rules on First Amendment grounds, it sets a different kind of precedent — one that reaches beyond procurement law into the broader question of whether the government can punish companies for publicly stating ethical positions. A First Amendment ruling in Anthropic's favor would be the most consequential outcome, establishing that AI companies' stated values have some degree of constitutional protection against government retaliation.
For AI regulation broadly, this case is a test of whether the existing legal framework — written before large language models existed — can handle disputes about AI safety guardrails. The court's opinion will be read by Congress, the White House, and every major technology company as a signal about what kind of legislation is needed to fill the gaps.
"This is not just a contract dispute. It is the first time a court will be asked to define the boundaries of government power over AI safety decisions. Whatever the judge rules, Congress will probably have to write a law in response." — legal commentator on the Lawfare Blog
The broader stakes are harder to quantify but may matter more. Anthropic's lawsuit puts on record, in a federal complaint, exactly what the Pentagon demanded and why Anthropic refused. That record exists regardless of how the court rules. It becomes part of the institutional history of how the U.S. government first tried to assert control over AI safety features — and how the first AI company pushed back.
This is the Apple v. FBI fight of the AI era. The outcome is uncertain. The precedent is not.
Frequently asked questions
What exactly is Anthropic suing over?
Anthropic is challenging the Pentagon's supply chain risk designation under 10 U.S.C. § 3252. Its core argument is that this statute was designed for hardware supply chain vulnerabilities from foreign-affiliated companies, not for punishing a domestic AI company over a policy disagreement. The lawsuit also raises First Amendment and due process claims.
What is 10 U.S.C. § 3252, and why does it matter?
Section 3252 of Title 10 gives the Secretary of Defense authority to exclude companies from DOD procurement on national security grounds if they pose a supply chain risk. It has historically been used against companies like Huawei and ZTE with documented ties to foreign intelligence services. Anthropic argues that applying it to a U.S. company that simply declined to remove safety features stretches the statute beyond its intended scope.
Has any AI company ever sued the U.S. government before?
Not over AI safety guardrails. This is the first lawsuit filed by an AI lab against the U.S. government over the government's attempt to compel changes to an AI system's safety features. The closest precedent is Apple's 2016 refusal to help the FBI unlock the San Bernardino shooter's iPhone, but Apple never filed suit — the government dropped the case first.
What happens to Anthropic's government business while the lawsuit proceeds?
The executive order banning Anthropic from federal systems remains in effect. The lawsuit does not automatically stay the supply chain designation. Anthropic may seek a preliminary injunction — a court order pausing the designation while the case is litigated — but winning a preliminary injunction requires showing both likely success on the merits and irreparable harm, which is a significant threshold.
How is this different from the Huawei entity list designation?
Huawei was designated based on documented ties to Chinese military intelligence, hardware backdoors, and covert espionage risk. Anthropic is a U.S. company with no foreign government affiliation, designated not for any covert threat but because its CEO publicly refused to remove safety features. The legal basis for the two designations is substantially different, which is central to Anthropic's argument.