TL;DR: Two days after signing a Pentagon AI contract with a loophole that explicitly permitted collection of "publicly available" American data, OpenAI went back and amended it. The new language adds an explicit prohibition on domestic surveillance of US persons and closes the commercially acquired data loophole. Sam Altman called the original deal "opportunistic and sloppy." Civil liberties groups that had been pushing for this exact language got it. The irony writes itself: what Anthropic demanded from the Pentagon — and was banned from all federal systems for demanding — OpenAI has now voluntarily added to its own contract, after public pressure made the original terms indefensible. This is the full breakdown of what changed, why it matters, and what it tells us about the actual balance of power between AI companies and the U.S. government.
What you will learn
- What the original OpenAI Pentagon contract said — and what it missed
- Altman's exact words: "opportunistic and sloppy"
- What the amendment actually adds
- The commercially acquired data loophole, explained
- How this compares to what Anthropic demanded
- Civil liberties groups: what they pushed for, what they got
- The NSA carve-out: what "no separate contract" means in practice
- Timeline: 13 days that changed AI's relationship with the Pentagon
- What the amendment does not fix
- What every AI company can learn from this sequence
- Frequently asked questions
What the original OpenAI Pentagon contract said — and what it missed
On March 1, 2026 — three days after President Trump signed an executive order banning Anthropic from all federal systems — OpenAI concluded an updated agreement with the Pentagon. The timing was not coincidental. With Anthropic's $200 million CDAO contract effectively terminated and every federal agency ordered to cease use of Claude, the message to remaining contractors was clear: cooperate or face the same outcome.
OpenAI cooperated. The original contract included language prohibiting the Pentagon from using OpenAI's models to collect "unconstrained" private data on American citizens, and from conducting domestic surveillance without "appropriate legal authorization." It also prohibited development of fully autonomous lethal weapons systems.
The problem was in the architecture of those commitments. The contract banned collection of "unconstrained" private data. But it included an explicit carve-out for "publicly available" information. In the context of how AI-enabled surveillance actually operates, that distinction is not a protection — it is a permission slip.
Modern mass surveillance does not work by hacking private accounts. It works by aggregating public-source data at scale: social media posts, public records, location data sold by data brokers, commercial purchase history, communications metadata. Each piece is technically "public." Combined at AI speed and AI scale, they reconstruct comprehensive behavioral profiles that courts have increasingly recognized as private — regardless of the technical status of each individual input.
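The aggregation dynamic can be shown with a toy sketch. Everything here is hypothetical (the records, the field names, and the `build_profile` function are invented for illustration, not drawn from any real system): each dataset is individually "public" or commercially purchasable, and the only operation performed is a join on a shared identifier.

```python
# Hypothetical records, each individually "public" or purchasable from a broker.
location_pings = [
    {"device": "d1", "place": "clinic", "ts": "2026-03-01T08:10"},
    {"device": "d1", "place": "office", "ts": "2026-03-01T09:30"},
]
purchases = [{"loyalty_id": "d1", "item": "prenatal vitamins"}]
posts = [{"handle": "d1", "text": "first day at the new job"}]

def build_profile(device_id: str) -> dict:
    """Join datasets that are harmless in isolation into one behavioral profile."""
    return {
        "movements": [p["place"] for p in location_pings if p["device"] == device_id],
        "purchases": [p["item"] for p in purchases if p["loyalty_id"] == device_id],
        "statements": [p["text"] for p in posts if p["handle"] == device_id],
    }

profile = build_profile("d1")
```

No single input reveals much; the joined output discloses health, employment, and daily routine, which is exactly the privacy interest courts have recognized in aggregated records.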
Sam Altman acknowledged the deal was "definitely rushed" within hours of it becoming public. The acknowledgment changed nothing on paper: the language civil liberties groups identified as most problematic, the "publicly available" carve-out and the "commercially acquired data" pathway, remained in the document.
Two days later, OpenAI went back and changed it.
Altman's exact words: "opportunistic and sloppy"
The evolution of Sam Altman's public characterization of the Pentagon deal is worth tracking carefully. It moved from "definitely rushed" on March 1 to "opportunistic and sloppy" by March 3. That is not the same statement. It is a more severe indictment of the process — and, implicitly, of the people at OpenAI who negotiated it.
"Opportunistic" signals that the original deal was driven by the moment: Anthropic had just been banned, the door was open, and OpenAI moved to fill the space before the opportunity closed. That is a commercially rational response to a competitor's setback. It is not, as Altman is implicitly acknowledging, a principled approach to drafting a contract that will govern how the U.S. military uses AI to collect data on American citizens.
"Sloppy" is even harder to spin. A CEO calling a deal "sloppy" is saying that it was not carefully constructed — that details that should have been caught were not caught, that language that should have been tightened was not tightened. In a contract whose core function is to define what the U.S. military can and cannot do with OpenAI's models vis-a-vis domestic surveillance, "sloppy" drafting has consequences that extend well beyond reputational optics.
Altman did not identify who was responsible for the sloppiness. He did not describe the internal review process that produced the original language. He did not specify whether the commercially acquired data loophole was an oversight or a deliberate concession to the Pentagon's negotiating demands. Those are questions that remain open.
What the public statement did was signal that OpenAI recognized the original contract's language was not defensible — and chose to acknowledge that directly rather than argue that critics had misread it. Given that critics had read it correctly, that was probably the right call.
What the amendment actually adds
The March 3 amendment to OpenAI's Pentagon contract added two substantive provisions that the original document lacked.
First: an explicit prohibition on domestic surveillance of US persons. The original contract prohibited "unconstrained" collection of private data and required "appropriate legal authorization" for surveillance activities. Neither of those formulations prohibited domestic surveillance as a category. The amendment adds language that does. The Pentagon is now contractually prohibited from using OpenAI's models for domestic surveillance of US persons — not just "unconstrained" private data collection, not just unauthorized surveillance, but the operational category of domestic surveillance itself.
Second: closure of the commercially acquired data loophole. The original contract's "publicly available" carve-out created a pathway for the Pentagon to aggregate data purchased from commercial data brokers — location data from app developers, purchase history from loyalty programs, consumer behavioral data sold through the commercial data ecosystem — without violating the contract's stated privacy protections. The amendment explicitly closes this pathway. Commercially acquired data on US persons is now treated the same as privately collected data: it cannot be used for surveillance purposes under the contract.
The Pentagon confirmed both provisions on March 3 and added a further clarification: OpenAI's models will not serve the NSA under this contract. If the NSA wants access to OpenAI technology, it requires a separate, independent contract negotiation — with its own terms and its own public scrutiny.
That last point is significant. The NSA is the U.S. intelligence community's primary signals intelligence agency. Its surveillance authorities — particularly under Executive Order 12333 — are among the broadest in the federal government. Explicitly scoping the Pentagon contract to exclude NSA use closes what would otherwise have been the most consequential operational vector for the surveillance concerns that civil liberties groups raised.
The commercially acquired data loophole, explained
To appreciate why closing the commercially acquired data loophole matters, you need to understand what the commercial data ecosystem actually contains in 2026.
The United States does not have a comprehensive federal data privacy law. In its absence, a large commercial market has developed in which personal behavioral data is collected, packaged, and sold. App developers monetize location data. Retail loyalty programs sell purchase history. Financial data aggregators package transaction records. Social media platforms license behavioral signals to advertisers — and through them, to data brokers who resell to any buyer.
The Pentagon is a buyer. The intelligence community has been purchasing commercially available data on Americans for years, in transactions that do not require warrants because the data is technically being purchased from commercial vendors rather than collected directly from individuals. In 2021, the Privacy and Civil Liberties Oversight Board released a report noting that the U.S. government's purchase of commercial data "raises serious concerns" about Fourth Amendment compliance. The practice has not stopped.
The original OpenAI contract's "publicly available" language would have placed commercially acquired data outside the surveillance prohibition. An AI model could have been used to aggregate commercially purchased location records, purchase histories, and behavioral profiles on American citizens without technically violating the contract's terms — because each piece of data was "publicly available" in the sense that it was sold on a commercial market.
The amendment explicitly reclassifies this data. "Commercially acquired" data on US persons is now included in the prohibition. This matters because it addresses how surveillance actually happens: not through hacking, but through commercial purchase and AI-assisted aggregation.
The legal basis for this reclassification has Supreme Court precedent behind it. In Carpenter v. United States (2018), the Court held that warrantless government acquisition of historical cell-site location records is a Fourth Amendment search, even though each individual record was held by a third party, the cell carrier. The Court's reasoning: aggregated location data over time reveals private facts about a person's life that they have not voluntarily shared in any meaningful sense, regardless of the technical status of the underlying records.
The amendment's treatment of commercially acquired data reflects this reasoning without explicitly citing it.
How this compares to what Anthropic demanded
The comparison between OpenAI's amended contract and Anthropic's stated red lines is the most important one in this story.
OpenAI's amended contract now covers three of the four substantive issues that Anthropic identified as red lines when it refused the Pentagon's demands. The categories of prohibition are now structurally similar. The primary remaining difference is enforcement: Anthropic's red lines were implemented through usage restrictions built into Claude's deployment, while OpenAI's contract relies on the Pentagon's contractual compliance and OpenAI's ability to identify and respond to violations.
That enforcement gap matters. A contractual prohibition depends on someone noticing a violation and having the leverage to enforce it. A technical restriction enforces itself. OpenAI has not disclosed whether any technical usage restrictions accompany the contractual language, or whether the contract alone is the operative constraint.
The irony that civil liberties advocates have noted publicly: the substantive protections that Anthropic demanded — and was banned for demanding — are now in OpenAI's contract. Dario Amodei held firm and lost every government contract. Sam Altman signed a deal he later called sloppy, amended it under public pressure, and now has a contract that looks, on paper, very similar to what Anthropic was asking for. The market outcomes are inverted from the ethical ones.
Civil liberties groups: what they pushed for, what they got
Between March 1 and March 3, civil liberties organizations issued coordinated public statements identifying the specific language in the original OpenAI contract that they considered inadequate. That public pressure — combined with what appears to have been internal recognition at OpenAI that the original language was indefensible — contributed to the amendment.
The ACLU's Technology and National Security team published an analysis arguing that the "publicly available" carve-out "effectively permits the Pentagon to use OpenAI's AI to conduct mass surveillance of Americans through commercial data aggregation — the primary surveillance vector for intelligence agencies in the post-Snowden era." The statement called for explicit categorical prohibition on domestic surveillance regardless of data source.
The Electronic Frontier Foundation separately noted that the "commercially acquired data" pathway was "the exact loophole through which the intelligence community has conducted warrantless surveillance of American citizens for years." The EFF called for explicit closure of that pathway.
The Brennan Center for Justice focused on the "appropriate legal authorization" language, noting that under Executive Order 12333, NSA surveillance activities that have never been publicly disclosed can be considered "legally authorized" under that standard — effectively making the authorization requirement a nullity for intelligence community purposes.
All three of these concerns are addressed, at least on paper, by the March 3 amendment. The domestic surveillance prohibition covers the first concern. The commercially acquired data closure covers the second. The NSA scope exclusion substantially addresses the third — by removing the NSA from the contract's scope entirely, rather than relying on the "appropriate legal authorization" standard that EO 12333 makes toothless.
Whether the organizations consider the amendment sufficient will depend on their assessment of enforcement mechanisms — the one gap the amendment does not directly address.
The NSA carve-out: what "no separate contract" means in practice
The Pentagon's confirmation that OpenAI's models will not serve the NSA under the current contract — and that NSA access would require a separate negotiation — deserves more attention than it has received.
The NSA's surveillance authorities are qualitatively different from the Pentagon's. The Department of Defense conducts foreign military intelligence. The NSA conducts signals intelligence under authorities that include bulk collection programs that were not publicly disclosed until Edward Snowden's 2013 revelations. Under Executive Order 12333, the NSA has broad authority to collect communications of non-US persons outside the United States — and significant latitude regarding incidentally collected US person data.
An AI contract with the Pentagon that did not explicitly exclude NSA use would have created a pathway for the NSA to access OpenAI's models through the existing Pentagon relationship, without the public scrutiny of a separate contracting process. The amendment closes that pathway.
This is significant for two reasons. First, it means any NSA use of OpenAI technology would require its own contract, with its own terms, and its own public announcement — creating at least the possibility of separate scrutiny. Second, it signals that OpenAI is drawing a distinction between military AI applications (helping the Pentagon with logistics, analysis, and operations) and intelligence community AI applications (helping the NSA conduct surveillance) — and treating the latter as categorically requiring a higher bar.
Whether that distinction holds when the NSA does approach OpenAI for a contract — and it will — depends on whether OpenAI treats the March 3 amendment as a statement of principle or as the minimum required to survive the current news cycle.
Timeline: 13 days that changed AI's relationship with the Pentagon
The sequence of events from February 19 to March 3, 2026, moved faster than most policy watchers anticipated.
- February 19: Hegseth's memo sets out the Pentagon's demands to AI contractors.
- February 27: President Trump signs the executive order banning Anthropic from all federal systems.
- March 1: OpenAI signs the updated Pentagon agreement. Within hours, Altman calls the deal "definitely rushed."
- March 1-2: Civil liberties groups publish analyses naming the specific loopholes, including the "publicly available" carve-out and the commercially acquired data pathway.
- March 3: OpenAI amends the contract. Altman calls the original deal "opportunistic and sloppy." The Pentagon confirms the new provisions and the NSA exclusion.
Thirteen days from Hegseth's memo to a substantively amended contract. The speed reflects several dynamics simultaneously: the Trump administration's aggressive posture, Anthropic's willingness to absorb the cost of refusal, OpenAI's initial choice to prioritize the government relationship over the contract details, and then the speed with which public and civil society pressure made the original language untenable.
The most instructive data point in the timeline: the gap between "definitely rushed" (March 1) and "opportunistic and sloppy" (March 3) is exactly two days. That is how long it took for the public analysis of the original contract's loopholes to become specific enough that Altman could no longer describe the deal as merely rushed. The language had been dissected. The loopholes had been named. The amendment was the response.
What the amendment does not fix
The March 3 amendment is substantively better than the original contract. It is not a complete answer to the concerns that critics raised.
Enforcement remains contractual, not technical. The prohibitions in the amended contract depend on the Pentagon's good-faith compliance and OpenAI's ability to detect and respond to violations. OpenAI has not disclosed whether any technical restrictions on OpenAI model behavior accompany the contractual language. Anthropic's red lines included technical usage restrictions built into Claude's deployment — constraints that enforce themselves regardless of the government's compliance posture. Whether OpenAI's contract-only approach provides equivalent protection is a question the amendment does not answer.
The "appropriate legal authorization" standard still applies to other activities. The amendment explicitly prohibits domestic surveillance of US persons. But the original contract's "appropriate legal authorization" language remains for other surveillance-adjacent activities. What counts as "appropriate" is still determined by executive branch legal standards — including the broad authorities under the National Security Act and FISA that the executive branch interprets largely without public disclosure.
The amendment covers OpenAI's models, not OpenAI's data. The prohibitions constrain what the Pentagon can do with OpenAI's AI systems. They do not address what happens to the data that flows through those systems during legitimate military use — how it is stored, who can access it, and whether it can be repurposed for intelligence activities outside the contract's stated scope.
Verification is asymmetric. OpenAI can verify that the Pentagon signed the amended contract. It cannot independently verify how the Pentagon is using OpenAI's models in classified operations. The company's ability to monitor compliance with its own contract is limited by the classified nature of many Pentagon AI deployments.
These gaps do not negate the amendment's value. They describe its limits. Civil liberties advocates who called the amendment a positive step have generally acknowledged that it does not resolve every concern — and that enforcement mechanisms remain the key unanswered question.
What every AI company can learn from this sequence
The OpenAI Pentagon sequence — original deal, criticism, amendment — contains several lessons that will apply to every AI company navigating similar pressures over the next 18 months.
Public language is a first draft. The original contract was released publicly. That means it was immediately available for analysis by lawyers, civil liberties advocates, and journalists who specialize in surveillance law. Vague or ambiguous language in a high-profile government AI contract is not a durable negotiating position. It is a first draft that will be revised under pressure. If you negotiate in good faith and get the language right initially, you avoid the reputational cost of the revision process. If you negotiate under time pressure and get it wrong, you will revise it publicly, at cost.
The pressure sequence is predictable. Hegseth's memo established a sequence: maximum demands, company capitulation or refusal, consequences for refusal. The lesson from the OpenAI experience is that the sequence does not end at signing. There is a second-order sequence: initial contract, public analysis, civil society pressure, amendment. Companies that anticipate the second-order sequence can negotiate stronger initial language rather than waiting for the amendment round.
Altman's admission was strategically correct, even if tactically uncomfortable. Calling the original deal "opportunistic and sloppy" was a real-time acknowledgment that the amendment was necessary and that the original language was inadequate. That framing — honest self-criticism followed by correction — is more durable than defending language that has been specifically dissected by experts. The reputational cost of "we got it wrong and fixed it" is lower than the cost of "we defend a contract that permits domestic surveillance of Americans."
What Anthropic was punished for demanding, OpenAI now voluntarily provides. This is the central irony. The substantive protection that Dario Amodei held firm on — and that cost Anthropic every government contract — is now in OpenAI's amended contract. The market rewarded OpenAI's initial capitulation. The civil society and reputational pressure then forced the amendment. The net result is that the protections exist in OpenAI's contract, Anthropic is still banned from federal systems, and the incentive structure for future companies is deeply ambiguous: hold firm and get banned, or sign and amend under pressure, and end up in roughly the same place with your government relationship intact.
The NSA distinction may be the most durable precedent. By explicitly scoping the Pentagon contract to exclude NSA use — and confirming that NSA access requires a separate negotiation — OpenAI has established a precedent that the surveillance authorities of intelligence agencies require separate, higher-scrutiny contracting. That distinction, if maintained, creates a meaningful structural barrier between military AI use and intelligence community AI use that did not exist before March 3.
Whether OpenAI maintains that distinction when the NSA comes calling with its own proposal is the next chapter of this story.
Frequently asked questions
What exactly did OpenAI change in the amended contract?
The March 3 amendment added two primary provisions. First, an explicit categorical prohibition on domestic surveillance of US persons — not just "unconstrained" private data collection, but domestic surveillance as an operational category. Second, closure of the commercially acquired data loophole: data purchased from commercial data brokers is now treated the same as directly collected private data and cannot be used for surveillance purposes. The Pentagon also confirmed that NSA access to OpenAI's models requires a separate, independent contract.
Why did Sam Altman call the deal "opportunistic and sloppy"?
Altman's language evolved between March 1 ("definitely rushed") and March 3 ("opportunistic and sloppy"). "Opportunistic" refers to the timing — OpenAI moved quickly to fill the space left by Anthropic's government-wide ban, prioritizing the opportunity over thorough contract drafting. "Sloppy" is a more direct acknowledgment that the specific language was not carefully constructed. Altman has not publicly identified who was responsible for the drafting or whether the loopholes were oversights or deliberate concessions to Pentagon negotiating demands.
How does the amended contract compare to what Anthropic demanded?
On the core surveillance issues, the amended contract now covers the same categories Anthropic identified as red lines: domestic surveillance prohibition, public-source data aggregation for surveillance, commercially acquired data, and NSA access. The primary remaining difference is enforcement mechanism: Anthropic's red lines were implemented through technical usage restrictions built into Claude's deployment, while OpenAI's contract relies on contractual compliance. OpenAI has not disclosed whether technical restrictions accompany the contractual language.
What does the NSA exclusion actually mean?
The Pentagon confirmed that OpenAI's current contract does not cover NSA use of OpenAI's models. If the NSA wants access to OpenAI technology, it must negotiate a separate, independent contract. This matters because the NSA's surveillance authorities — particularly under Executive Order 12333 — are qualitatively broader than standard Pentagon authorities. By removing the NSA from the existing contract's scope, OpenAI has ensured that any NSA use of its technology would require its own public announcement and contracting process, rather than flowing through the existing Pentagon relationship.
Did civil liberties groups consider the amendment sufficient?
Groups including the ACLU, EFF, and Brennan Center have acknowledged the amendment as a substantive improvement over the original contract. Their remaining concerns center on enforcement: the amended prohibitions are contractual rather than technical, which means compliance depends on the Pentagon's good-faith adherence and OpenAI's ability to detect violations in classified deployments. Most organizations called the amendment a necessary step but not a complete resolution of their concerns.
What happened to Anthropic while OpenAI was amending its contract?
Anthropic remains banned from all federal agencies under the Trump executive order signed February 27. The Pentagon has a six-month phaseout window for Anthropic's Claude in classified systems because it was embedded through Palantir's platform. During the same period in which OpenAI amended its contract to include the protections Anthropic demanded, Anthropic's government ban remained in effect. The market consequences of OpenAI's approach versus Anthropic's have not converged: OpenAI has a government contract with stronger language; Anthropic has no government contract at all.
What is the timeline for the original deal and the amendment?
The original deal was signed March 1, 2026, three days after the Anthropic government-wide ban. Civil liberties groups published analyses identifying specific loopholes on March 1-2. OpenAI amended the contract and Sam Altman publicly called the original deal "opportunistic and sloppy" on March 3. The full sequence — from Hegseth's initial memo to the amended contract — spanned 13 days.