TL;DR: The Trump administration filed a formal court defense on March 17, arguing the Pentagon's supply chain risk designation of Anthropic is "justified and lawful." The same day, a group of former federal judges filed an amicus brief backing Anthropic's legal challenge — adding judicial weight to arguments about statutory overreach and constitutional retaliation. The first hearing is set for March 24. What happens in that courtroom will shape how much power the U.S. government can exercise over AI companies' safety policies.
What you will learn
- The March 17 court filing: what the government actually argued
- Why former federal judges took Anthropic's side
- The full timeline: how a contract dispute became a federal lawsuit
- What the supply chain risk designation means in practice
- The statutory core: 10 U.S.C. § 3252 and its limits
- The constitutional claims: First Amendment and due process
- What the Mayer Brown analysis means for contractors
- Where the rest of the AI industry stands
- What to watch at the March 24 hearing
- What a ruling in either direction would mean
- Frequently asked questions
The March 17 court filing: what the government actually argued
On March 17, 2026, the Trump administration submitted its first formal court response in the Anthropic litigation, defending the Pentagon's supply chain risk designation as both "justified and lawful."
The filing stakes out ground on three fronts.
Broad statutory discretion. The administration argues that 10 U.S.C. § 3252 gives the Secretary of Defense wide, largely unreviewable authority to determine what constitutes a supply chain risk. The government's position is that courts owe substantial deference to national security procurement decisions — and that Anthropic's disagreement with how that discretion was exercised is not a reviewable legal claim.
Democratic legitimacy. The filing restates the argument Pentagon officials previewed in February: that Congress grants the executive branch authority over defense procurement, and private companies cannot hold that process hostage by imposing contractual guardrails on military use of technology. The government frames Anthropic's usage restrictions not as legitimate commercial terms but as an unauthorized attempt by a private corporation to override defense policy.
No cognizable injury on First Amendment grounds. The administration argues that the government's procurement decisions — even those motivated in part by a vendor's public statements — do not constitute viewpoint-based retaliation cognizable under the First Amendment. The government does not owe any private company a contract, and declining to award or continue one is generally outside the First Amendment's scope.
What the filing does not do is directly address the core statutory scope question at the heart of Anthropic's challenge: whether § 3252's supply chain risk authority, designed for foreign-linked hardware vendors, legally extends to a domestic AI software company that refused to change its usage policy. That argument remains the linchpin for Anthropic — and the government's apparent decision to argue around it rather than through it is something the judge is likely to press at the March 24 hearing.
Why former federal judges took Anthropic's side
The March 17 amicus brief filed by former federal judges is the most significant outside-party development in the case so far — and its implications extend well beyond the specific legal arguments being made.
Amicus briefs from former judges carry a different weight than briefs from advocacy organizations or industry groups. They signal that credentialed legal professionals with firsthand experience reading and applying federal statutes believe the government's position has serious doctrinal problems. Courts take that signal seriously.
According to CNN's reporting, the former judges' brief focuses on two core arguments.
Statutory overreach. The brief argues that § 3252's supply chain risk authority was written in response to a specific threat — foreign-affiliated hardware manufacturers embedding vulnerabilities in physical components of U.S. defense networks — and that extending it to a domestic AI software company over a policy disagreement represents a significant departure from the statute's text and history. The brief draws on the legislative record from the NDAA debates to argue that Congress did not intend to give the executive branch a general-purpose blacklisting tool for domestic technology disputes.
Procedural due process. The former judges argue that even if the designation were legally authorized, the compressed timeline — from ultimatum to supply chain designation in a matter of days, without notice or meaningful opportunity to respond — violated the procedural protections the statute itself requires. The government cannot invoke a national security tool and simultaneously claim that the speed of its application is unreviewable.
The brief does not take a position on the merits of Anthropic's AI safety policies. It takes a position on whether the legal mechanism used to punish Anthropic for those policies was lawfully deployed. That narrow, structural framing is exactly how amicus briefs earn judicial attention: not by advocating for a side, but by identifying a problem in the law that the court needs to address regardless of which party wins.
The full timeline: how a contract dispute became a federal lawsuit
Understanding where the litigation stands requires knowing the sequence of events that produced it.
July 2024. Anthropic signs a contract with the Pentagon's Chief Digital and Artificial Intelligence Office, worth up to $200 million. The contract includes usage restrictions: Claude will not be deployed for fully autonomous lethal weapons systems or mass surveillance of American citizens. The Pentagon agrees to these terms.
January 2026. Defense Secretary Pete Hegseth issues a memo calling for an "AI-first warfighting force" and demanding AI models be available for all military purposes "free from usage policy constraints." The memo directly contradicts the terms of the Anthropic contract.
February 24. Hegseth meets Anthropic CEO Dario Amodei in person. The Pentagon issues a formal ultimatum: remove all usage restrictions or face consequences. The deadline is 5:01 PM ET on Friday, February 28.
February 26. Amodei publishes a public refusal. He draws two explicit lines — no autonomous lethal weapons, no mass surveillance of Americans — and states the Pentagon's overnight counter-offer contained "legalese that would allow those safeguards to be disregarded at will."
February 27. President Trump signs an executive order directing every federal agency to immediately cease use of Anthropic technology, with a six-month phaseout for the Pentagon.
March 3. Defense Secretary Hegseth formally designates Anthropic a supply chain risk under 10 U.S.C. § 3252. The designation is not symbolic — it functions as a government-wide blacklist.
March 9. Anthropic files two federal lawsuits: one challenging the statutory basis for the supply chain designation, and one raising constitutional claims on First Amendment and due process grounds.
March 17. The Trump administration files its first formal court response, defending the designation. Former federal judges file an amicus brief supporting Anthropic's challenge. Reporting confirms the first hearing is set for March 24.
The pace of this litigation is unusually fast for federal court. Both parties appear motivated to reach an early ruling — Anthropic because the supply chain designation is actively damaging its commercial business, the government because a preliminary injunction would effectively reinstate Anthropic's access to federal systems while the case is litigated.
What the supply chain risk designation means in practice
The supply chain designation is not a symbolic sanction. Its operational effects are broad and, in several respects, extend far beyond Anthropic's direct government business.
The immediate effects are straightforward: Anthropic is prohibited from new DOD procurement, and existing DOD contracts are subject to termination. The six-month phaseout window in the executive order exists because Claude is integrated into classified environments through Palantir's secure infrastructure — removing it is a nontrivial engineering and security operation, not a switch that can be flipped overnight.
The more consequential effects are indirect. Under the NDAA framework governing supply chain risk designations, companies holding DOD contracts must certify that they do not use designated vendors' products for DOD-related work. That certification requirement creates a compliance surface that extends to every defense contractor, subcontractor, and government-adjacent technology vendor that uses Anthropic's API for any purpose touching federal business.
The practical outcome: companies with mixed civilian and government revenue streams often scrub designated vendors entirely rather than audit which internal workflows touch DOD work. The compliance cost of maintaining a designated vendor in a mixed-use environment frequently exceeds the cost of switching. This amplification effect is what makes the supply chain designation a much larger commercial threat than the loss of the $200 million Pentagon contract alone.
For context: Anthropic's valuation was estimated at approximately $380 billion in early 2026, supported largely by commercial enterprise growth. The enterprise segment — which includes large companies, financial institutions, law firms, and technology companies — is precisely where federal contracting exposure is most concentrated. If Fortune 500 compliance officers interpret the supply chain designation as a reason to avoid Anthropic for any government-adjacent work, the financial impact dwarfs the direct DOD contract loss.
That exposure is why the designation, now formally in effect, is creating urgency on Anthropic's side to seek a preliminary injunction ahead of the March 24 hearing.
The statutory core: 10 U.S.C. § 3252 and its limits
Anthropic's primary legal argument is a statutory one: the supply chain risk designation authority in § 3252 does not reach a domestic AI software company being penalized for a usage policy disagreement.
The statute authorizes the Secretary of Defense to exclude sources from procurement when they present a "supply chain risk." That framing was crafted in response to the Huawei problem: Chinese telecommunications hardware manufacturers whose equipment was potentially compromised at the manufacturing level, creating exploitable backdoors in U.S. defense and communications networks. The statute's legislative history, and its prior applications, are anchored in exactly that threat model.
Anthropic's argument runs in three layers.
Textual. The term "supply chain" describes a hardware-era concept — the chain of custody from component manufacture through assembly, delivery, and installation. Anthropic's large language models are not physical components. They do not sit in a supply chain in any sense that § 3252 was designed to address. The company's refusal to remove safety restrictions is a policy disagreement between a vendor and a customer, not a supply chain vulnerability.
Scope. Even if the statute reaches commercial software, its prior application has been limited to companies that present a national security threat because of foreign ownership, control, or influence. Anthropic is a U.S.-incorporated, U.S.-headquartered company with no foreign government affiliation. The designation is not based on any intelligence finding about espionage or sabotage risk. It is based on a documented policy dispute in which the government's own contract records show that Anthropic's restrictions were agreed to and then retroactively rejected.
Arbitrary and capricious. Under the Administrative Procedure Act, agency actions are subject to review for being arbitrary and capricious. Using a national security supply chain tool to punish a domestic AI company for refusing to abandon contractual terms the government itself previously accepted is precisely the kind of agency action that standard is designed to catch.
The government's March 17 filing did not directly engage the textual argument. It emphasized deference and discretion. That may prove to be a strategic choice — arguing that courts should not look too closely at the statutory scope question — but it also risks leaving the argument unanswered if the judge presses on it at the March 24 hearing.
The constitutional claims: First Amendment and due process
Beyond the statutory argument, Anthropic is pressing two constitutional claims that carry implications well beyond this case.
First Amendment. Anthropic's position is that the supply chain designation is viewpoint-based retaliation: the government penalized the company specifically because of publicly stated values — that AI should not be used for autonomous lethal weapons or mass surveillance. The executive order explicitly framed the action as a response to Amodei's refusal. That factual record is unusual in government procurement cases; most First Amendment retaliation claims in contracting contexts fail for lack of a clear nexus between the speech and the government action. Here, the nexus is documented in writing.
The government's counter, detailed in the March 17 filing, is that procurement decisions fall outside the First Amendment's protections — the government can choose its vendors for any reason, including disagreement with their stated positions, without that constituting viewpoint discrimination cognizable under the First Amendment. That is a legally defensible position, but it is not bulletproof: courts have found First Amendment violations in government contracting contexts when the retaliation is sufficiently explicit and direct.
Due process. The Fifth Amendment requires notice and an opportunity to be heard before the government takes action that deprives a party of a protected property or liberty interest. Anthropic's due process argument is procedural: the designation moved from ultimatum to formal supply chain risk designation in less than a week, without the notice and response process that § 3252's own implementing regulations describe. The government cannot invoke a statutory national security tool and simultaneously sidestep the statute's own procedural requirements on urgency grounds.
The due process claim is the most immediately winnable argument. Even a court skeptical of Anthropic's statutory and First Amendment arguments might find that the compressed timeline violated procedural requirements — and a due process ruling could vacate the designation and require the Pentagon to restart the process with proper notice. That would not be a permanent win for Anthropic, but it would buy time and force the government to build a more substantial record.
What the Mayer Brown analysis means for contractors
Law firm Mayer Brown published a client advisory in the days following the March 3 designation, and its analysis has become required reading for defense contractors and technology companies with government exposure.
The advisory focuses on a question that has received less public attention than the Anthropic litigation itself: what does the supply chain designation mean for the thousands of companies that use Anthropic's API in products and services sold to the federal government?
The answer is more complicated than a simple ban. The NDAA's supply chain risk framework creates compliance obligations that cascade down contractor supply chains. But the scope of those obligations depends on how the Pentagon formalizes the designation — specifically, whether it issues a broader exclusion notice or limits the application to direct DOD procurement.
The key contractor implications from the Mayer Brown analysis:
Certification exposure. Companies pursuing or holding DOD contracts that involve information technology may need to certify that their supply chains do not include designated vendors. If the Pentagon's formal exclusion order triggers certification requirements, contractors using Anthropic's API in any product touching DOD work face compliance decisions immediately.
Audit obligations. Even if certification is not immediately required, prudent contractors will need to audit their technology stacks for Anthropic exposure. That audit is not cheap — and its existence creates a commercial incentive to remove Anthropic from supply chains proactively, before any formal certification obligation arises.
Subcontractor flow-downs. Prime contractors with DOD contracts typically include supply chain compliance clauses that flow down to subcontractors. A subcontractor using Anthropic's API in a workflow that ultimately supports a DOD prime contract may find itself in breach of its subcontract terms — even if it has no direct relationship with the DOD.
Mixed-use complexity. The hardest situation is a company using Anthropic's API for both commercial and government work on the same platform. Separating those use cases is technically possible but administratively burdensome. Many companies will choose to remove Anthropic entirely rather than maintain that separation.
The Mayer Brown analysis does not take a position on whether the designation is legally valid. It takes a position on what prudent compliance looks like given that the designation is currently in effect. For Anthropic, every enterprise customer reading that analysis and making the pragmatic choice to switch vendors is evidence of irreparable commercial harm — exactly the kind of harm that supports a preliminary injunction request.
Where the rest of the AI industry stands
Anthropic is alone in the courtroom. The broader industry response is notable as much for what has not happened as for what has.
OpenAI publicly agreed with Anthropic's red lines on autonomous weapons and mass surveillance, and CEO Sam Altman made public statements of solidarity during the February standoff. But OpenAI has not been threatened with a supply chain designation — perhaps because it has not defied the Pentagon as publicly — and no OpenAI brief has appeared in the Anthropic litigation.
Google remains institutionally quiet at the leadership level, despite hundreds of employee signatories on letters supporting Amodei during the February standoff. Google's exposure through DOD cloud contracts and its JEDI-era history make public intervention in this case politically difficult.
xAI represents the path not taken. Elon Musk's company agreed in February to allow Grok on classified networks for "any lawful use" with no restrictions. It is now the model the administration points to as the proper relationship between an AI company and the government it serves. xAI's compliance makes Anthropic's resistance more visible, not less.
Microsoft has significant federal exposure through Azure Government and IVAS. It has said nothing. Its silence is strategic: publicly supporting Anthropic would complicate Microsoft's own DOD relationships; publicly opposing it would alienate a portion of the AI industry it depends on for its enterprise AI stack.
Civil liberties organizations have been the most active outside parties. The Electronic Frontier Foundation has signaled interest in the statutory scope argument. The ACLU has indicated potential involvement on the First Amendment claim. Their presence in the amicus brief ecosystem, alongside former federal judges, gives the court a wider range of credentialed outside perspectives to draw on.
What to watch at the March 24 hearing
The March 24 hearing is the first substantive proceeding in the case, and three things will matter most.
Preliminary injunction status. The most consequential question the judge will address is whether Anthropic has met the threshold for a preliminary injunction — a court order pausing the supply chain designation while the merits are litigated. The standard requires showing a likelihood of success on the merits, irreparable harm absent relief, a balance of equities favoring the plaintiff, and a public interest served by the injunction. Anthropic's strongest case is on irreparable harm — the commercial cascade effects documented in Mayer Brown's analysis are precisely the kind of ongoing, hard-to-quantify injury that justifies preliminary relief.
Judicial engagement with the § 3252 argument. How the judge questions the government's lawyers about the statutory scope issue will reveal how seriously the court is taking the textual argument. If the judge presses on why a statute designed for foreign hardware vendors applies to a domestic AI company over a policy dispute, that signals Anthropic's best path. If the judge accepts the government's deference framing at the outset, Anthropic faces an uphill fight.
Scheduling. The hearing will also produce a case schedule — briefing deadlines, a potential preliminary injunction hearing date, and a projected timeline for merits argument. That schedule matters enormously for Anthropic's commercial situation: the longer the designation remains in full effect, the more enterprise customers make permanent migration decisions.
The March 24 hearing is not a final ruling. But it is the first real signal about how receptive this particular court is to Anthropic's arguments — and that signal will ripple through the AI industry's legal and compliance planning immediately.
What a ruling in either direction would mean
This case will take months, likely over a year, to reach a final judgment. But its trajectory is already producing effects, and the eventual ruling will carry consequences that extend well beyond Anthropic's commercial position.
If Anthropic wins on § 3252 grounds, the supply chain risk designation is vacated and the statute's scope is clarified: it reaches foreign-affiliated hardware vendors presenting covert national security threats, not domestic AI companies in policy disputes with the government. That clarification would protect every AI company that includes ethical usage restrictions in government contracts. The government would need to rely on other tools — executive orders, contract renegotiation, legislation — to compel changes to AI safety features.
If Anthropic wins on due process grounds, the designation is vacated and the Pentagon must restart the designation process with proper notice. This is a narrower win — the government could potentially re-designate Anthropic after following the required procedures — but it would force a genuine process that includes an evidentiary record, giving Anthropic's lawyers more to work with in a subsequent challenge.
If Anthropic wins on First Amendment grounds, the precedent is the most sweeping: courts hold that the government cannot use regulatory power to punish an AI company for publicly stated values about how its technology should be used. That ruling would reach far beyond procurement law into the broader question of corporate speech rights in the AI era.
If the government wins, the supply chain risk tool becomes available as a general-purpose enforcement mechanism against domestic AI companies. Every AI company with federal exposure — which, at this point, means virtually every significant AI company — would face a binary choice: accept unrestricted government use of their technology or accept designation. The safety guardrail model that Anthropic, OpenAI, and Google have built into their commercial products would face existential regulatory pressure.
The institutional stakes beyond the litigation: this case is forcing Congress to confront questions it has avoided — specifically, whether existing defense procurement law adequately addresses AI companies, and whether new legislation is needed to either expand or limit government authority over AI safety features. Whatever the court rules, it will read as an invitation for legislative action.
Frequently asked questions
What did the Trump administration argue in its March 17 court filing?
The government argued the supply chain designation is "justified and lawful," emphasizing broad executive discretion over national security procurement decisions and arguing courts owe deference to those decisions. The filing also argued Anthropic has no cognizable First Amendment injury from a procurement decision, even one motivated by Anthropic's public statements.
What did the former federal judges argue in their amicus brief?
The former judges' brief, reported by CNN on March 17, focused on statutory overreach and due process — specifically that § 3252 was not designed for domestic AI software companies in policy disputes, and that the compressed timeline from ultimatum to designation violated procedural protections. Amicus briefs from former judges carry significant weight because they signal structural legal problems rather than partisan advocacy.
What is the first hearing date and what will it cover?
The first hearing is set for March 24, 2026. It will likely address Anthropic's request for a preliminary injunction — a court order pausing the supply chain designation while the case is litigated — and will produce a case schedule for the merits phase.
What is a supply chain risk designation under 10 U.S.C. § 3252?
Section 3252 of Title 10 gives the Secretary of Defense authority to exclude companies from DOD procurement on national security grounds if they present a supply chain risk. It has historically been applied to foreign-affiliated hardware companies like Huawei and ZTE. Anthropic's designation is the first application to a domestic AI software company over a policy disagreement.
Why did Hegseth designate Anthropic a supply chain risk?
Defense Secretary Hegseth made the designation on March 3, 2026, after Anthropic refused to remove guardrails preventing Claude from being used for autonomous weapons systems and mass surveillance of American citizens. The Pentagon had demanded removal of these restrictions by February 28; Anthropic's CEO Dario Amodei publicly refused on February 26.
Is the supply chain designation currently in effect?
Yes. The designation went into effect on March 3 and has not been paused by a court order. Anthropic's preliminary injunction request, if granted at or after the March 24 hearing, would pause it while the case is litigated.
What are the two federal lawsuits Anthropic filed?
Anthropic filed two federal lawsuits on March 9, 2026: one challenging the statutory basis of the supply chain designation under § 3252 and the Administrative Procedure Act, and a second raising constitutional claims under the First Amendment (retaliation for protected speech) and the Fifth Amendment's due process clause.
How does this compare to the Huawei supply chain designation?
Huawei was designated based on documented ties to Chinese military intelligence, hardware backdoors in physical telecommunications equipment, and covert espionage risk. Anthropic is a U.S.-incorporated, U.S.-headquartered company with no foreign affiliation, designated not for any covert threat but because its CEO publicly refused to remove safety features. The prior application of § 3252 to Huawei is central to Anthropic's argument that the statute does not reach its situation.
What does the Mayer Brown analysis say about contractors?
Mayer Brown's client advisory flagged that the designation creates compliance obligations cascading through contractor supply chains — potentially requiring certification, audit, and subcontractor flow-down compliance for any company using Anthropic's API in government-adjacent work. Many contractors will remove Anthropic from their stacks proactively to avoid compliance complexity, even before any formal certification obligation arises.
What is the First Amendment argument in Anthropic's lawsuit?
Anthropic argues that the supply chain designation is viewpoint-based retaliation — the government penalized the company specifically because of its publicly stated values about AI safety. The executive order and Pentagon statements explicitly framed the action as a response to Amodei's refusal, creating an unusually clear documentary record of the nexus between the speech and the government action.
Could Anthropic get a preliminary injunction before the merits are decided?
Possibly. A preliminary injunction requires showing likelihood of success on the merits, irreparable harm, favorable balance of equities, and public interest support. Anthropic's strongest argument is irreparable harm — the commercial cascade effects from contractor compliance decisions are ongoing and difficult to quantify monetarily. The merits showing is harder but improved by the former judges' amicus brief.
What does this case mean for AI companies that include safety restrictions in government contracts?
If Anthropic wins, it establishes that § 3252 cannot be used to punish domestic AI companies for usage policy disagreements, protecting any company that builds safety restrictions into government AI contracts. If the government wins, the supply chain tool becomes available as a general-purpose enforcement mechanism, and every AI company with federal exposure would face pressure to remove ethical guardrails preemptively.
Why didn't other AI companies join the lawsuit?
OpenAI has expressed solidarity with Anthropic's position but has not been threatened with a designation — likely because it has not publicly defied the Pentagon in the same way. Google and Microsoft have significant DOD exposure that makes public legal intervention politically costly. xAI took the opposite path entirely, accepting unrestricted Pentagon access to Grok. The litigation is Anthropic's alone.
Could Congress step in?
This case is forcing congressional attention to a gap in existing law: defense procurement statutes were not written with domestic AI companies in mind. Whatever the court rules, it will read as an invitation for legislative action — either to expand the government's authority to compel compliance with military AI requirements, or to establish legal protections for AI companies that include safety restrictions in government contracts.
What is the broader significance beyond Anthropic's business?
This is the first time a federal court will be asked to define the legal boundaries of government power over AI safety decisions. The outcome will be read by Congress, the executive branch, and every major technology company as a signal about what kind of legislation and regulation is needed to govern the relationship between AI companies and the federal government — a question that no existing statute directly addresses.
Sources: U.S. News & World Report, March 17, 2026 — CNN, March 17, 2026