Dario Amodei refused the Pentagon's deadline. Then Trump banned Anthropic from government.
Anthropic's CEO rejected unrestricted military AI use. Hours later, Trump ordered all federal agencies to drop Claude immediately.
TL;DR: Anthropic CEO Dario Amodei refused Defense Secretary Pete Hegseth's 5:01 PM Friday deadline to allow unrestricted military use of Claude. Hours before the deadline expired, President Trump ordered every federal agency to "immediately cease" using Anthropic's technology, with a six-month phaseout for the Pentagon. The dispute centered on two red lines Anthropic would not cross: no autonomous lethal weapons, no mass surveillance of Americans. This is the most consequential standoff between a tech company and the U.S. government since the Apple-FBI encryption fight in 2016.
This dispute did not start on February 27. It has been building for months. But the final 72 hours moved faster than anyone expected.
Tuesday, February 25. Defense Secretary Pete Hegseth met with Dario Amodei in person. The meeting produced no agreement. Later that day, the Pentagon delivered a formal ultimatum: comply by 5:01 PM ET on Friday, February 28, or face consequences. The demand was explicit. Allow Claude to be used for "all lawful purposes" by the military, without the usage restrictions Anthropic had negotiated into its original contract.
Wednesday, February 26. Amodei published his response. "We cannot in good conscience accede to their request." He stated that Anthropic's two red lines, no autonomous weapons and no mass surveillance of Americans, were non-negotiable. He also noted that the contract language the Pentagon sent overnight "made virtually no progress" on Anthropic's concerns, with supposed compromises "paired with legalese that would allow those safeguards to be disregarded at will."
Thursday, February 27. Pentagon Chief Technology Officer Emil Michael responded on X, calling Amodei a "liar" with a "God complex." Michael wrote: "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." Hours later, before the Friday deadline even arrived, President Trump signed an executive order directing all federal agencies to "immediately cease all use of Anthropic's technology." The Pentagon received a six-month phaseout window.
Three days. That is how long it took to go from a meeting to a government-wide ban.
The Pentagon's position evolved over the course of the dispute. The original contract, signed last July as part of a $200 million deal to prototype frontier AI capabilities for defense, included usage restrictions that both sides agreed to. Anthropic was one of four companies awarded contracts alongside Google, OpenAI, and xAI.
The shift came in January 2026. On January 9, Defense Secretary Hegseth sent a memo announcing the Pentagon's push toward an "AI-first warfighting force." The memo called for AI models to be used for all military purposes "free from usage policy constraints" set by individual AI companies. That was a direct contradiction of the contract terms Anthropic had signed.
The demand: remove all company-imposed restrictions on how the military uses Claude. The Pentagon wanted the same terms it got from xAI, which agreed on Monday, February 24, to let Grok be used on classified networks for "any lawful use." No guardrails. No red lines.
Pentagon CTO Emil Michael framed it in terms of democratic authority. "Congress writes bills, the president signs them, agencies write regulations, and people comply," he said to Breaking Defense. "What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed."
"This is not about whether the military should use AI. It's about whether a company can say 'yes, but not for these two specific things.' That should be an unremarkable position." -- @jackclarkSF
Amodei's February 26 statement was carefully worded. He did not attack the Pentagon. He did not question the legitimacy of military AI use. He drew exactly two lines and explained why.
"We cannot in good conscience accede to their request," Amodei wrote. He stated that domestic mass surveillance and fully autonomous weapons are uses that are "simply outside the bounds of what today's technology can safely and reliably do."
That framing is worth examining. Amodei did not argue that autonomous weapons are morally wrong in all circumstances. He argued that current AI technology is not reliable enough for lethal autonomous decisions. That is a technical safety argument, not a pacifist one. It leaves room to revisit the question as technology matures, while holding firm on what is safe today.
On surveillance, the argument is simpler. Mass domestic surveillance without consent or individualized suspicion raises Fourth Amendment concerns. Anthropic will not be the tool that enables bulk monitoring of Americans.
Amodei also addressed the Pentagon's overnight counter-offer directly. He said the new contract language "made virtually no progress" on Anthropic's core concerns. The supposed compromises came with clauses that would have allowed the Pentagon to override the safeguards at will. In practice, accepting those terms would have meant accepting no restrictions at all.
The statement also made clear that Anthropic had tried to find middle ground. "Every iteration of our proposed contract language would enable our models to support missile defense and similar uses," an Anthropic spokesperson confirmed. The company offered carve-outs for defensive applications. It was willing to let Claude help intercept missiles, defend against cyberattacks, and support logistics. The two things it would not do: target and kill people autonomously, or surveil American citizens en masse.
The Pentagon's response to Amodei's statement was not diplomatic.
Late on Thursday, February 27, Emil Michael posted on X calling Amodei a "liar" with a "God complex." His full statement: "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk."
That language is extraordinary. A senior Pentagon official publicly calling a major tech CEO a liar is not normal communication. It signals the relationship has broken down completely, not just at the policy level, but personally.
The "God complex" framing recasts Anthropic's safety restrictions as personal arrogance rather than corporate policy. The implication: Amodei believes he knows better than the entire U.S. military. Whether or not that characterization is fair, it is designed to make his position politically toxic.
A CBS News report captured the Pentagon's frustration more diplomatically. A Pentagon official stated: "You have to trust your military to do the right thing." That reveals the core divide. The Pentagon believes the military should self-regulate its AI use. Anthropic believes trust-based governance is insufficient for technologies that can kill autonomously or surveil at scale.
The final escalation came from the top. Hours before the 5:01 PM Friday deadline, President Trump signed an executive order directing all federal agencies to stop using Anthropic's technology.
Trump wrote that he was directing "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." The order included a six-month phaseout period for the Pentagon, acknowledging that Anthropic's technology is deeply embedded in classified military systems through its partnership with Palantir.
That Palantir detail matters. Anthropic is the only AI company whose products are actively used on classified networks, integrated through Palantir's defense analytics platform. Ripping that out requires migrating classified workflows, retraining personnel, and validating replacement systems for classified use. Six months is tight.
The executive order effectively converts the Pentagon's threat into government-wide policy. Anthropic is not just losing its $200 million defense contract. It is losing access to every federal agency, civilian and military alike. The scope of the ban goes well beyond what the Pentagon dispute was about.
Anthropic was recently valued at approximately $380 billion. The $200 million Pentagon contract is not existential. But being banned from all government use is a different matter. More importantly, the "supply chain risk" implications could extend to any company that works with the government and uses Anthropic in its stack. That second-order effect is potentially far more damaging than the direct contract loss.
"The executive order is not about $200M. It's about sending a message to every AI company: cooperate fully, or be locked out of government entirely." -- @karaswisher
Before the executive order, the Pentagon had threatened two specific consequences if Anthropic refused to comply.
Supply chain risk designation. This label is normally reserved for companies considered extensions of foreign adversaries (think Huawei or Kaspersky). Being designated a supply chain risk means any company doing business with the U.S. military would need to prove it does not use anything related to Anthropic in its Pentagon work. That would effectively blacklist Anthropic from the entire defense ecosystem, including through third-party vendors.
Defense Production Act invocation. The DPA, originally passed in 1950 during the Korean War, gives the president broad authority to direct private companies to prioritize government contracts and production orders. The Pentagon suggested it could invoke the DPA to force Anthropic to provide its AI systems without restrictions.
Legal experts are skeptical the DPA would hold up. The law was designed for physical goods: steel mills, tank factories, ammunition plants. Using it to compel a software company to remove safety features would be unprecedented.
Charlie Bullock, senior research fellow at the Institute for Law & AI, stated: "If neither side backs down, it seems realistic that there would be litigation between Anthropic and the government." Legal expert Dodge was more direct, saying the DPA approach is "without precedent under the history of the DPA" and that the law "has never been used to compel a company to produce a product that it's deemed unsafe, or to dictate its terms of service."
Prior DPA invocations have looked nothing like this. Trump used it during COVID-19 to boost medical supplies. Biden used it for the 2022 baby formula shortage. None involved forcing a company to remove safety features. Trump chose the executive order route instead, accomplishing the immediate goal without the legal uncertainty of an untested DPA application.
The most misunderstood aspect of this dispute is Anthropic's actual position. The company did not refuse to work with the military. It refused two specific use cases.
Anthropic offered Claude for:

- Missile defense and interception
- Defense against cyberattacks
- Logistics and support functions
- Most other military applications

The two things Anthropic would not allow:

- Fully autonomous lethal weapons that select and engage targets without human oversight
- Mass domestic surveillance of American citizens
The gap between what Anthropic offered and what the Pentagon demanded is narrow but philosophically deep. The Pentagon insisted on no exceptions of any kind.
The positions of Anthropic's competitors became part of the story.
xAI (Elon Musk). Agreed on Monday, February 24, to allow its Grok chatbot on classified networks for "any lawful use." No restrictions. No red lines. Full compliance with the Pentagon's demands.
OpenAI. CEO Sam Altman publicly said he shares the "red lines" set by Anthropic. That is significant from Anthropic's largest competitor, suggesting OpenAI is not willing to go as far as the Pentagon wants either.
Google. Did not issue a public statement on the specific dispute, but its employees spoke loudly through the open letter.
Palantir. Caught in the middle. Its partnership with Anthropic is the mechanism through which Claude operates on classified networks. The six-month phaseout means Palantir needs a replacement AI backbone for classified workflows. The company has not publicly commented.
Anduril. Palmer Luckey's defense-focused AI company has no restrictions on military applications. It was not directly involved, but its model of building AI for defense without ethical guardrails is exactly what the Pentagon wants from all vendors.
On Thursday, February 27, a coalition of tech workers from Anthropic's competitors published an open letter supporting Amodei's position. Over 300 Google employees and more than 60 OpenAI employees signed.
The letter specifically called on executives at Google and OpenAI to maintain Anthropic's red lines against mass surveillance and fully autonomous weapons. It urged the leaders of those companies to publicly support Anthropic and refuse unilateral military use of their AI systems.
This matters because it demonstrates Anthropic's position is not an outlier within the AI industry. The people building these systems at competing companies believe the same restrictions should apply. It also creates internal pressure at Google and OpenAI. Google pulled out of Project Maven in 2018 after employee protests over drone surveillance. OpenAI faced criticism when it updated its usage policies to allow military applications in January 2024. This letter reopens those wounds.
"300+ Google employees and 60+ OpenAI employees signing an open letter supporting a competitor's CEO. That's not solidarity. That's the industry telling the government: these lines are not one company's position." -- @emilymbender
The situation as of February 27, 2026, is clear in the short term and uncertain in the long term.
Short term. Anthropic is banned from all federal government systems. The Pentagon has six months to phase out Claude from classified networks. The $200 million defense contract is effectively dead.
Legal questions. If the government attempts to invoke the Defense Production Act or formalize the supply chain risk designation, Anthropic will almost certainly challenge it in court. Legal experts believe the government's position is weak on both fronts. The DPA was not designed for this purpose, and labeling a domestic company as a supply chain risk for refusing to remove safety features has no precedent.
Commercial impact. The direct financial hit is manageable for a company valued at $380 billion. The indirect impact is harder to measure. Some companies may avoid Anthropic out of fear of government retaliation. Others may see the stand as a sign of trustworthiness. Enterprise customers who care about AI safety may prefer a vendor that proved it would not bend under pressure.
Industry precedent. This is the most significant confrontation between an AI company and the U.S. government to date. The outcome shapes how every AI company negotiates military contracts going forward. If Anthropic suffers severe consequences, the lesson is that resistance is costly. If it emerges intact, the model of holding ethical red lines is validated.
The Apple comparison. In 2016, Apple refused the FBI's demand to build a backdoor into the iPhone used by the San Bernardino shooter. Apple held firm. The FBI found another way in. Apple's brand was strengthened by the fight. Whether Anthropic follows the same arc depends on the next six months.
The Pentagon demanded that Anthropic allow Claude to be used for "all lawful purposes" by the military, with no company-imposed restrictions. Specifically, the Pentagon wanted to remove Anthropic's prohibitions on autonomous weapons and mass domestic surveillance.
No fully autonomous lethal weapons (weapons that select and engage targets without human oversight) and no mass domestic surveillance of American citizens. Anthropic was willing to allow Claude for missile defense, cyber defense, logistics, and most other military applications.
Trump ordered every federal agency to "immediately cease" using Anthropic's technology. The Pentagon received a six-month phaseout window because Claude is embedded in classified systems through Palantir. The order goes beyond the Pentagon dispute, banning Anthropic from all government use.
The direct Pentagon contract was worth up to $200 million. The full scope of government revenue loss is broader, as the ban covers all federal agencies. Anthropic's overall valuation is approximately $380 billion, so the contract loss alone is not existential.
The Pentagon threatened to invoke the DPA, but legal experts are skeptical. The DPA has never been used to compel a company to remove safety features from a product or dictate terms of service. Legal experts like Charlie Bullock say litigation is likely if the government tries this route, and the government would probably lose.
A label normally reserved for companies tied to foreign adversaries (like Huawei or Kaspersky). If applied to Anthropic, any company working with the Pentagon would need to prove it does not use Anthropic's technology. This would effectively blacklist Anthropic from the entire defense ecosystem, including through third-party vendors.
OpenAI CEO Sam Altman publicly said he shares Anthropic's red lines. Over 300 Google employees and 60+ OpenAI employees signed an open letter supporting Amodei's position. xAI (Elon Musk's company) took the opposite approach, agreeing to unrestricted military use of Grok.
Pentagon CTO Emil Michael called Amodei a "liar" with a "God complex" on X, saying Amodei "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk."
The February 26 reporting covered the building pressure and Amodei's formal public refusal. The February 27 developments represent a major escalation: the Pentagon's personal attacks, Trump's executive order banning Anthropic from all government systems, and the open letter from competitors' employees.
Both cases involve a tech company refusing a government demand to weaken product safeguards. The key difference is scale: Apple faced one law enforcement request; Anthropic faces a government-wide ban and potential Defense Production Act invocation.