TL;DR: On February 28, the U.S. and Israel launched Operation Epic Fury against Iran — and the Pentagon used Claude for intelligence analysis and target selection. This happened approximately 24 hours after President Trump signed an executive order banning Anthropic from every federal agency, calling the company a "Radical Left AI company." WSJ and Axios confirmed on March 2 that Claude was embedded in systems used during the strikes. Anthropic says it had no control over deployments already in the field.
What you will learn
- The timeline: ban, strikes, and revelation
- Operation Epic Fury: what we know about the strikes
- How Claude was used: intelligence, targeting, simulation
- The ban that didn't stick: executive order vs. military reality
- The Defense Production Act threat: what it means
- Anthropic's position: the red lines they drew
- OpenAI fills the gap: who replaced Anthropic
- Legal and ethical implications
- What this means for AI companies and military contracts
- Frequently asked questions
The timeline: ban, strikes, and revelation
The sequence of events between February 25 and March 2 is one of the stranger chapters in the short history of AI policy.
February 25. Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for what was described as an urgent meeting on AI capabilities. The substance of the meeting: the Department of Defense wanted Anthropic to strip usage restrictions from Claude — specifically, restrictions that prohibited the model from supporting autonomous lethal targeting and mass domestic surveillance. Amodei declined on the spot.
February 26. Amodei published a public statement laying out Anthropic's position. The statement was unusually direct for a technology CEO navigating a government dispute. He identified two non-negotiable red lines: Claude would not be used for autonomous weapons targeting without human decision-making in the loop, and Claude would not be used for mass surveillance of American citizens. "We cannot in good conscience accede to their request," the statement read. Pentagon CTO Emil Michael responded on X within hours, calling Amodei a "liar" with a "God complex."
February 27. President Trump signed an executive order directing every federal agency and department to "immediately cease" using Anthropic technology. The order described Anthropic as a "Radical Left AI company" and designated it a "supply chain risk" under Secretary Hegseth's review. The order specified a six-month phaseout period for any Anthropic systems already embedded in agency workflows. It also threatened to invoke the Defense Production Act to compel Anthropic's compliance — a legally contested but historically real mechanism for forcing private companies to serve national security needs.
February 28. The U.S. military and the Israel Defense Forces launched Operation Epic Fury, a coordinated series of strikes against Iranian military infrastructure. The operation involved multiple target packages, real-time intelligence fusion, and battlefield simulation. According to reporting confirmed by WSJ and Axios on March 2, Claude was actively used in the analytical infrastructure supporting the strikes — including intelligence processing and target validation workflows.
March 2. WSJ and Axios published simultaneous confirmations. Anthropic acknowledged that Claude had been deployed in defense systems prior to the executive order and that the company had no technical mechanism to remotely disable those deployments once they were in the field.
The gap between the executive order and the confirmed use of Claude in active military operations: approximately 24 hours.
Operation Epic Fury: what we know about the strikes
Operation Epic Fury was a joint U.S.-Israel military operation launched on February 28, 2026, targeting Iranian military infrastructure. The Times of Israel and CNBC were among the first to confirm the operation's existence. The Star Advertiser and American Bazaar both reported on its scope.
Public details about the operation's precise targets and outcomes remain limited by operational security constraints. What has been confirmed through multiple reporting channels:
- The operation was coordinated between U.S. Central Command and Israeli defense leadership
- Multiple target packages were involved, requiring real-time intelligence fusion across satellite, signals, and human intelligence streams
- The strikes involved both precision munitions and electronic warfare components
- Iranian military communications infrastructure was among the reported target categories
- The operation was characterized by defense officials as "time-sensitive" — meaning the targeting decisions had to be made on compressed timelines
That last point matters for understanding how AI ended up in the targeting chain. Time-sensitive targeting is exactly the use case where AI-assisted analysis is most attractive to military planners. When a target window opens and closes in minutes, the ability to rapidly synthesize intelligence, cross-reference threat data, and validate a target package against a known database — tasks that Claude can perform in seconds — becomes operationally significant.
The American Bazaar's reporting characterized the operation as "the first major U.S.-Israel joint action of the 2026 geopolitical cycle" and noted that the AI-assisted analytical layer represented a capability leap over prior joint operations, where intelligence fusion was done manually across distributed analyst teams.
How Claude was used: intelligence, targeting, simulation
The specific functions Claude performed in Operation Epic Fury have been reported across three primary domains, based on the WSJ and Axios investigations published on March 2.
Intelligence analysis. Defense systems embedded with Claude were used to process and synthesize large volumes of intelligence data — signals intercepts, satellite imagery metadata, prior operational reports — to produce summary assessments of Iranian military positions and capabilities. This is the least legally controversial category of AI military use: it sits firmly in "decision support," where a human analyst reviews AI-generated summaries before any targeting decision is made.
Target selection support. This is the more contested category. Reporting describes Claude as having been used in workflows that generated target recommendations — ranked lists of potential strike objectives based on military value, collateral damage estimation, and intelligence confidence levels. The critical legal question, which has not been fully answered in public reporting, is whether human decision-makers reviewed and approved each target before it was included in a strike package, or whether the AI-generated ranking was used to accelerate approvals without meaningful human review.
Battlefield simulation. The third use is the most strategically significant and the least documented in public reporting. Claude was used to run scenario simulations — modeling Iranian military responses to various strike configurations, estimating second-order effects, and flagging potential escalation pathways. This kind of red-teaming assistance is valuable to operational planners precisely because it can enumerate non-obvious consequences faster than a human team can.
The distinction between these categories matters enormously for the legal and ethical analysis. AI-assisted intelligence analysis has existed in defense systems for decades. AI-assisted target selection, where the AI's ranking influences which humans die, sits at the heart of every international debate about autonomous weapons.
The ban that didn't stick: executive order vs. military reality
The 24-hour gap between Trump's executive order and confirmed Claude usage in active operations is not a paradox — it is the predictable result of how government software deployments actually work.
Executive orders do not reach into running software. When an agency deploys an AI system, that system runs on servers, in workflows, connected to databases that have their own operational continuity requirements. An order to "cease using" a technology does not pause a running military operation. It begins a process — procurement reviews, contract modifications, replacement sourcing — that takes weeks to months to complete, even under emergency timelines.
The executive order specified a six-month phaseout period for precisely this reason. The administration knew that removing embedded AI systems from active operational contexts could not happen overnight without creating dangerous gaps in capability. The phaseout language is the executive branch acknowledging practical reality.
But that acknowledgment creates an obvious problem: the order was framed publicly as an immediate ban of a "Radical Left AI company" — and then that company's technology was used in active military strikes approximately 24 hours later. Whether or not you believe the strikes were justified, the optics are stark. The administration simultaneously declared Anthropic a supply chain risk and relied on Anthropic's technology in one of the most consequential military operations of the year.
Defense Department officials, in background conversations with reporters, have declined to characterize this as a contradiction. The official position is that systems deployed before the executive order were in legal operation during their phaseout period and that this is consistent with the text of the order.
Critics, including The Conversation's coverage of the story, read it differently: "The revelation that Claude was used in the strikes hours after the ban represents either a stunning failure of executive coordination or evidence that the ban was political theater from the start."
The Defense Production Act threat: what it means
One element of the Trump administration's response to Anthropic that has received less coverage than the executive order is the threat to invoke the Defense Production Act, or DPA.
The DPA is a Korean War-era statute that gives the executive branch broad authority to direct private companies to prioritize production of goods and services for national security needs. It has been invoked most visibly in recent memory during COVID-19, when it was used to compel manufacturers to produce ventilators and personal protective equipment. The law is real, it has teeth, and it has been sustained against legal challenge multiple times.
The threat to use the DPA against Anthropic would represent its application to a software company's usage policies — a novel and legally untested extension of the statute. The DPA has historically been used to compel production of physical goods or specific services, not to override a technology company's acceptable use policies.
Legal analysts quoted in CNBC's reporting on the standoff were divided. Some argued that Claude's target analysis capabilities fit within a broad reading of "services" covered by the DPA. Others argued that compelling a company to remove safety restrictions from an AI model goes beyond what the DPA authorizes — that the statute covers what companies produce, not how they configure what they produce.
Anthropic's legal team reportedly concluded that a DPA order along these lines would face immediate challenge and uncertain prospects in court. But the uncertainty itself is the threat. A DPA dispute could tie up Anthropic's legal resources for years and create a cloud of regulatory risk that chills enterprise contracts far more effectively than any court ruling would.
The administration chose not to formally invoke the DPA — opting instead for the executive order and the phaseout framework. Whether the DPA threat was a serious legal option or a negotiating posture remains unclear.
Anthropic's position: the red lines they drew
Dario Amodei's February 26 statement established two explicit red lines that Anthropic would not cross regardless of government pressure.
Red line one: no autonomous lethal targeting. Claude can assist with intelligence analysis, battlefield simulation, and decision support — but it cannot be used as the final decision-maker in a targeting chain where the output is a lethal strike without human review and authorization. This is consistent with existing international norms on autonomous weapons, though those norms are still evolving and not universally binding.
Red line two: no mass domestic surveillance. Claude cannot be deployed in systems designed to conduct bulk surveillance of American citizens — the kind of large-scale monitoring programs that have been debated since the NSA surveillance revelations of 2013. Amodei specifically named domestic surveillance as distinct from foreign intelligence operations.
What Anthropic did not say — and this is important — is that Claude cannot be used in military contexts at all. The company has previously supported defense-adjacent applications that involve decision support, logistics optimization, and analytical assistance. The red lines are about specific high-risk applications, not categorical military abstention.
That nuance has been lost in some coverage of the story, which has framed Anthropic's position as "refusing to work with the military." The more accurate framing is that Anthropic established specific limits — and the Pentagon wanted those limits removed. The company's position was not pacifist; it was a specific objection to specific capability unlocks.
Anthropic's public statement did not address the Operation Epic Fury situation directly. The company's position on systems deployed before the executive order — and whether those systems crossed Anthropic's own red lines in practice — has not been addressed in public communications as of March 2.
OpenAI fills the gap: who replaced Anthropic
Within hours of the Trump executive order banning Anthropic, OpenAI moved to fill the commercial gap.
OpenAI had already been in advanced discussions with the Pentagon about a separate contract, one that would make GPT-series models available on DoD classified networks. The Anthropic ban accelerated those discussions. By March 1, reporting confirmed that OpenAI had reached a preliminary agreement to step into several of the analytical workflows that Anthropic's systems had supported.
OpenAI's military use policy is materially different from Anthropic's. The company had updated its usage policies in early 2024 to explicitly permit military and national security applications, removing prior restrictions that had prompted internal employee protests. The current policy permits what OpenAI characterizes as "lawful" military uses without the categorical restrictions that Anthropic maintains.
Sam Altman publicly characterized the OpenAI-Pentagon arrangement as "the right call for national security." He acknowledged that the optics of the deal had drawn criticism but argued that AI companies unwilling to support democratic governments were effectively ceding the field to less scrupulous actors — a version of the "if not us, someone worse" argument that has circulated in tech-defense debates since at least the Google-Project Maven controversy in 2018.
The QuitGPT boycott movement that emerged in response to the OpenAI-Pentagon deal — which has grown to over 1.5 million participants as of March 2 — represents the public reaction to Altman's position. The scale of that reaction suggests Altman's "right call" assessment is not universally shared.
Legal and ethical implications
The Operation Epic Fury situation raises legal and ethical questions that will take years to fully adjudicate — if they are adjudicated at all.
The Anthropic liability question. Can Anthropic be held legally responsible for how its technology was used in a military operation it had no control over, after it had publicly stated its objections to those uses, in the context of systems deployed before the government relationship was formally terminated? The answer under current U.S. law is almost certainly no — but "almost certainly" is doing significant work in that sentence. The legal framework governing AI developer liability for downstream use is still largely unwritten.
The executive authority question. The administration's use of an executive order to ban a specific company from government contracts — predicated on a dispute over that company's internal usage policies rather than any demonstrated security failure — is unusual. It raises questions about whether the government can use procurement access as leverage to override a private company's safety standards. The answer may depend on how courts interpret executive procurement authority, a question that has not been tested in the AI context.
The international law question. The use of AI in target selection during armed conflict is governed, in theory, by the laws of armed conflict — including the principles of distinction (targeting only combatants), proportionality (strike effects proportionate to military advantage), and precaution (all feasible steps to minimize civilian harm). Whether the target selection workflows that Claude participated in complied with these principles is a question that international law scholars are already beginning to raise in print.
The accountability gap. Perhaps the most practically significant question: if an AI system contributes to a targeting decision that later proves to have been erroneous — a wrong target, an underestimated civilian casualty count — who is accountable? The current answer is that human decision-makers are accountable, regardless of what analytical tools they used. But if AI systems are making recommendations that humans approve at scale under time pressure, the de facto accountability may be more distributed than the de jure accountability suggests.
What this means for AI companies and military contracts
Operation Epic Fury and its aftermath will accelerate a reckoning that the AI industry has been postponing for two years.
Every major AI lab is now forced to answer a question it would prefer to defer: what will you let your technology be used for, specifically, and who enforces that?
Anthropic has given its answer — two red lines, publicly stated, held under direct government pressure at significant commercial cost. Holding those positions cost the company $200 million in government contracts and drew a presidential executive order against it.
OpenAI has given its answer — lawful use is sufficient, democratic governments get access, the market decides what lawful means.
Google DeepMind is still constructing its answer, under pressure from employees who remember Project Maven, investors who want defense revenue, and a corporate parent with broad government relationships.
xAI has implicitly given its answer by pursuing unrestricted military contracts.
The differentiation between these positions will matter more after Operation Epic Fury than it did before. The story of Claude being used in strikes against Iran hours after the ban is not just a news cycle. It is evidence that the gap between an AI company's stated ethics and its technology's actual deployment can be a matter of hours. That gap should concentrate minds across the industry.
For enterprises evaluating AI procurement, the question is increasingly not just "which model performs best" but "what do we inherit when we deploy this technology, and whose ethical framework are we aligning with?"
For governments, the question is whether the current procurement framework — which treats AI models as services to be bought rather than actors whose values matter — is adequate for the decisions that AI systems are now participating in.
Neither question has a clean answer yet. But Operation Epic Fury made them impossible to defer.
Frequently asked questions
Did Anthropic know its technology would be used in Operation Epic Fury?
Anthropic says no. The company's position is that Claude was embedded in Department of Defense analytical systems prior to the executive order and that Anthropic had no knowledge of or control over those deployments once they were in the field. The company has no mechanism to remotely monitor or disable models deployed on government networks. This is consistent with how enterprise AI deployments typically work — the model is licensed and deployed by the customer, not operated by the vendor.
Does using Claude in target selection mean the strikes violated international law?
That question has not been answered and may not be answerable from public information. The laws of armed conflict require that human decision-makers exercise meaningful control over targeting decisions. Whether the human review that occurred in the Operation Epic Fury targeting chain was genuinely meaningful — or was effectively rubber-stamping AI-generated recommendations under time pressure — is not known from public reporting. International law scholars and human rights organizations are expected to pursue accountability questions as more information becomes available.
Why couldn't the executive order stop Claude from being used in the strikes?
Executive orders do not immediately halt running software systems. The order specified a six-month phaseout period precisely because removing embedded AI from active operational systems cannot be done safely overnight. The Pentagon acknowledged, through background briefings, that systems deployed before the order were in legal operation during the phaseout window. Critics argue this creates an obvious loophole: government agencies can deploy AI systems from disfavored companies and then claim they cannot be removed for operational continuity reasons.
What will happen to Anthropic's government contracts after the six-month phaseout?
All federal agency contracts are expected to terminate or transfer to competing providers over the six-month window. OpenAI, Google, and xAI are all in active discussions with relevant agencies. Some contracts may be held in legal limbo if Anthropic challenges the executive order in court — which the company has not announced plans to do as of March 2 but has not ruled out. The enterprise commercial market outside government remains open to Anthropic and is where the company's near-term revenue recovery will need to come from.
What are Anthropic's two red lines exactly?
As stated publicly by CEO Dario Amodei: Claude will not be used for autonomous lethal weapons targeting without meaningful human decision-making in the loop, and Claude will not be used for mass domestic surveillance of American citizens. These are the two specific uses Anthropic refused to enable when the Pentagon demanded removal of usage restrictions. The company has not objected categorically to military use — only to these specific high-risk applications.
Could the Defense Production Act have actually forced Anthropic to comply?
Legal opinion is divided. The DPA gives the executive branch authority to direct private companies to produce goods and services for national security. Its application to a software company's internal usage policies is legally untested. Constitutional law scholars consulted by CNBC were split on whether compelling Anthropic to remove AI safety restrictions would survive judicial review. The administration chose not to formally invoke the DPA, suggesting either that it concluded the legal case was weak or that the executive order accomplished sufficient political objectives without needing the more aggressive tool.