OpenAI's Pentagon deal has the same safety loopholes Anthropic refused to accept
Sam Altman admitted the Pentagon deal was 'definitely rushed.' The key loophole: it doesn't ban collection of publicly available American data. Full breakdown.
TL;DR: Three days after Trump banned Anthropic from all government systems for refusing to remove safety guardrails, OpenAI quietly signed a Pentagon deal with a document that Sam Altman himself called "definitely rushed." The contract bans collection of "unconstrained" private American data — but explicitly carves out "publicly available" information. That single phrase is doing enormous work. Here is what it means, why Anthropic drew a harder line, and what OpenAI's choice tells us about how the safety-vs-government-access debate is really going to play out.
On March 1, 2026, OpenAI released additional details about its agreement with the Pentagon. The context matters: this deal was concluded in the 72 hours immediately following Trump's executive order banning Anthropic from all federal systems. OpenAI signed while the lesson of that ban was still fresh in the industry.
The contract, according to details shared by OpenAI, includes explicit protections against certain uses. The Pentagon agreed it will not use OpenAI's models to:

- Build or operate "fully autonomous lethal weapons systems" that engage targets without human oversight
- Conduct "unconstrained" collection of private data on American citizens
- Carry out domestic monitoring activities without "appropriate legal authorization"
Read quickly, those commitments look substantive. Read carefully, they raise immediate questions.
The key word in the surveillance provision is "unconstrained." The contract does not ban domestic surveillance. It does not ban collection of American data. It bans "unconstrained" collection of private data — while leaving open the collection of what it calls "publicly available" information.
That is the loophole. And it is not small.
"The distinction between 'publicly available' and 'private' data is doing enormous work in this contract. Those two words determine whether OpenAI's commitment to protecting Americans has real teeth." — Charlie Bullock, Institute for Law & AI
To understand why the "publicly available" carve-out is significant, you need to understand how modern surveillance actually works.
Mass surveillance in the digital era does not primarily rely on hacking private systems. It aggregates public information. Social media posts. Public court records. DMV records available through commercial data brokers. Photos tagged with location data. Forum posts. Comments sections. Purchase history shared with third-party apps. Professional profiles. News mentions.
Each piece of that information is, technically, "publicly available." None of it was shared with the expectation that it would be aggregated into a comprehensive profile by an AI system operating at scale on behalf of the U.S. military.
The legal concept here is the "mosaic theory" — the idea that combining individually innocuous pieces of public information can reveal deeply private facts about a person's life. The Supreme Court has engaged with this idea. In United States v. Jones (2012), Justice Sotomayor wrote in a concurrence: "aggregation of information... can reveal more... than any single piece of information conveyed."
An AI model trained to assist military intelligence operations and given access to "publicly available" American data is not being asked to break into anyone's private accounts. It is being asked to do what surveillance analysts have done for decades — but at a speed and scale that makes the privacy implications categorically different.
The Anthropic-Pentagon dispute made this explicit. Anthropic's red line was not limited to hacking private data. It covered mass domestic surveillance — the bulk aggregation of information about Americans without individualized suspicion, regardless of whether each individual piece was technically public. OpenAI's contract carves that aggregation back in, as long as the source data is "publicly available."
What makes the OpenAI situation genuinely unusual is that Sam Altman said the quiet part out loud.
In a statement following the release of the deal details, Altman acknowledged that the Pentagon agreement was "definitely rushed." He also said the "optics don't look good." Those are not the words of a CEO confident he got the balance right. They are the words of a CEO who knows he made a compromised deal under pressure, and chose to acknowledge it rather than pretend otherwise.
The pressure was real. Anthropic had just been banned from every federal agency in the country. The message from the Trump administration was unmistakable: companies that refuse to cooperate with military AI demands will be locked out of government contracts entirely. Anthropic's $200 million Pentagon contract was dead. Its relationships with civilian federal agencies were severed. The cost of holding firm was quantifiable and immediate.
OpenAI, which also held a $200 million CDAO contract and was simultaneously raising what would become the largest private funding round in history, had its own reasons to avoid that outcome. Signing a deal — even a rushed one — kept the door open.
But Altman's phrasing reveals the tension. Saying a deal was "definitely rushed" is an implicit acknowledgment that the details were not scrutinized as thoroughly as they should have been. Saying "the optics don't look good" is an implicit acknowledgment that reasonable people looking at those details will have legitimate concerns.
That is a different kind of statement than you get from a CEO who believes he negotiated a principled agreement and is prepared to defend every clause.
The contrast between what Anthropic demanded and what OpenAI accepted is the core of this story. Setting them side by side makes the gap visible.
| Issue | Anthropic's red line | OpenAI's contract language |
|---|---|---|
| Autonomous weapons | Explicit prohibition on systems that "identify, select, and engage targets without human oversight" | Prohibition on "fully autonomous lethal weapons systems" — similar, but negotiated under time pressure |
| Domestic surveillance | No mass surveillance of American citizens — public or private | Bans "unconstrained" collection of private data; explicitly permits collection of "publicly available" information |
| Scope of commitment | Non-negotiable; Anthropic refused to sign under pressure | Signed under acknowledged time pressure; Altman flagged optics concerns publicly |
| Government response | Trump executive order banning Anthropic from all federal agencies | Deal accepted; government relationship preserved |
The autonomous weapons language is actually closer between the two companies than it first appears. Both contracts use "fully autonomous lethal weapons" as the prohibited category, and both retain human oversight requirements. The substantive divergence is in the surveillance provisions.
Anthropic's position was: we will not allow Claude to be used for mass domestic surveillance, full stop. That framing covers both private data collection and public data aggregation, because "mass surveillance" refers to the scope and intent of the operation, not the legal status of the source data.
OpenAI's contract uses a different architecture. It permits everything except "unconstrained" collection of "private" data. The effect: public-source aggregation for intelligence purposes is not prohibited.
Dario Amodei, in his February 26 public statement, was explicit about why this distinction mattered: "domestic mass surveillance without consent or individualized suspicion raises Fourth Amendment concerns." The concern was about the scale and targeting of the operation, not whether each data point was technically private.
The intelligence community's relationship with "publicly available" data has been developing for decades. The practice has a formal name: OSINT, or Open Source Intelligence. What AI changes is the cost curve and the scale.
Before AI, OSINT required human analysts. Monitoring the social media activity of a specific target required dedicating analysts to that target. Scaling to mass monitoring of thousands or millions of people was expensive, slow, and required justifying the resource expenditure. Those friction points were imperfect but real constraints.
AI eliminates them. A model capable of processing vast quantities of unstructured text — posts, comments, forum threads, public records — can build comprehensive behavioral profiles on millions of people simultaneously, at a cost that approaches zero per additional subject.
Consider what "publicly available" data actually includes in 2026:

- Social media posts and profile metadata
- Public court, property, and voter records
- DMV records resold through commercial data brokers
- Photos tagged with location data
- Forum posts and comment-section activity
- Purchase history shared with third-party apps
- Professional profiles and news mentions
An AI model with access to these categories — all "publicly available" by reasonable definition — can reconstruct a person's daily movements, political views, religious practices, financial situation, relationships, and psychological state without ever touching anything legally classified as "private."
That is mass surveillance. It just uses a different input.
OpenAI's relationship with military AI has evolved rapidly, and not always in a straight line.
January 2024. OpenAI quietly updated its usage policies to remove language that had explicitly prohibited "military and warfare" applications. The change drew criticism when it became public. OpenAI argued the update was about allowing legitimate national security work, such as cybersecurity research, not about enabling weapons development.
Throughout 2024. OpenAI entered into contracts with the Pentagon's Chief Digital and AI Office as part of the same four-company deal that included Anthropic, Google, and xAI. The contracts included usage restrictions that all four companies agreed to at signing.
January 2026. Defense Secretary Hegseth sent a memo announcing the Pentagon's push toward an "AI-first warfighting force" with models available for all military purposes "free from usage policy constraints." This was the shot across the bow for all four contractors.
February 24, 2026. xAI agreed to allow Grok on classified networks for "any lawful use" — no restrictions, no red lines. The Pentagon got what it wanted from Elon Musk's company first.
February 26-27, 2026. Anthropic refused the Pentagon's demand. Trump banned Anthropic from all federal systems.
March 1, 2026. OpenAI signed its updated agreement with the language analyzed in this piece.
The trajectory is clear. The Pentagon established a sequence: get one company to fold, use that as leverage against others, escalate against those who refuse. xAI folded with no resistance. Anthropic refused and paid the price. OpenAI found a middle path — a deal with protective language that, on scrutiny, contains significant carve-outs.
The January 2024 policy update was the first signal that OpenAI was willing to move toward military use. The March 2026 deal is the formalization of that direction, conducted under the explicit pressure of watching a competitor get banned from the U.S. government.
The "publicly available" data carve-out has drawn attention from legal and policy experts who focus on the intersection of AI and civil liberties.
The ACLU framing. The American Civil Liberties Union has consistently argued that the "public" versus "private" distinction is insufficient protection against AI-enabled surveillance. In prior statements on related issues, ACLU technologists have noted that "the harm of surveillance comes from its scope and purpose, not from the technical source of the data." OpenAI's contract language does not address scope or purpose.
The mosaic theory problem. Legal scholars who work on Fourth Amendment digital surveillance issues have pointed to Carpenter v. United States (2018), in which the Supreme Court held that warrantless collection of cell site location records violated the Fourth Amendment even though individual location pings had previously been considered "public" information accessible without a warrant. The court recognized that aggregation changes the privacy calculus. OpenAI's contract does not incorporate this reasoning.
The "lawful authorization" gap. The contract's domestic surveillance clause requires "appropriate legal authorization" for certain monitoring activities. But "appropriate legal authorization" in the military context can include very broad authorities under the National Security Act, the Foreign Intelligence Surveillance Act, and Executive Order 12333, which authorizes significant surveillance activities that have never been publicly detailed. What counts as "appropriate" is determined by the executive branch — the same branch that just banned Anthropic for refusing to cooperate.
The enforcement question. Anthropic's red lines were company policy enforced through usage restrictions built into Claude's training and deployment. OpenAI's contract relies on the Pentagon's good-faith compliance with contractual commitments. These are different mechanisms. One is technical. The other is legal. Technical constraints are harder to override in the field; contractual constraints require someone to notice and enforce a violation.
Charlie Bullock of the Institute for Law & AI, who commented on the earlier Anthropic dispute, noted that the central question is "not whether the contract says the right things, but whether those words have operational meaning" — whether there is an enforcement mechanism with teeth.
OpenAI's deal sets a template. That is the most consequential part of this story.
By concluding a deal — rushed or not — OpenAI has established a proof of concept: a large frontier AI company can sign with the Pentagon without dropping all restrictions, so long as the restrictions it keeps preserve significant operational flexibility for the government. The "publicly available data" carve-out is not a Pentagon invention. It is the kind of language that gets negotiated under time pressure when both sides want a deal but one side has the structural leverage.
Every AI company now watching this situation faces a narrower negotiating corridor. The Pentagon can point to OpenAI's agreement and say: this is what a reasonable deal looks like. It prohibits fully autonomous weapons. It prohibits "unconstrained" private data collection. It includes "appropriate legal authorization" requirements for surveillance. What is your objection?
The answer, if you are Anthropic, is that those provisions contain exactly the loopholes that make them inadequate. But making that argument requires refusing the deal, and refusing the deal under the current administration means what it meant for Anthropic: a government-wide ban.
The practical menu for AI companies is now:
Full compliance (xAI model). Allow unrestricted use for "any lawful purpose." No red lines. Full government access. Maximum revenue, minimum ethics overhead.
Negotiated deal with loopholes (OpenAI model). Sign a contract with protective language that contains carve-outs sufficient for the government to operate as it wishes. Maintain the relationship. Accept the optics concerns.
Hard refusal (Anthropic model). Hold firm on specific red lines regardless of cost. Face government-wide ban. Accept revenue loss. Potentially establish legal precedent if the government overreaches in response.
Most AI companies will choose option 2. It preserves government revenue, avoids the personal attacks that Dario Amodei received from Pentagon CTO Emil Michael, and provides cover against criticism with language that sounds protective in press releases.
Whether option 2 actually protects Americans from AI-enabled surveillance depends on the Pentagon's good-faith compliance with contract terms that, as written, permit significant public-data aggregation operations. That is a significant amount of trust to extend to an institution that just banned a company for refusing to extend that same trust.
Altman publicly acknowledged that the deal was "definitely rushed" and that "the optics don't look good." This is a notable admission — most CEOs defending a controversial deal use language that projects confidence in the outcome. Altman's phrasing suggests awareness that the contract's details are vulnerable to scrutiny and that the timeline did not allow for the care the situation warranted.
The contract bans the Pentagon from using OpenAI's models to collect "unconstrained" private data on American citizens. The key term is "unconstrained" — it does not prohibit all data collection, only collection without defined limits. The contract also explicitly permits collection of "publicly available" information, which encompasses a vast range of data that AI can aggregate into detailed behavioral profiles.
Anthropic's red line was mass domestic surveillance — defined by the scope and intent of the operation, not the legal status of the source data. That position prohibits AI-assisted bulk profiling of Americans using public data. OpenAI's contract permits collection of "publicly available" information, which means the bulk profiling use case is not contractually prohibited, as long as the source data was not "private."
Modern mass surveillance operates primarily through aggregation of public-source data. Social media activity, public records, location data sold by data brokers, commercial purchase history — each piece is technically public, but aggregated at scale by an AI system, they reconstruct comprehensive private profiles. This is the "mosaic theory" problem legal scholars have identified. The Supreme Court has recognized it as a real privacy concern in Carpenter v. United States (2018). OpenAI's contract does not address it.
OpenAI did once prohibit military use outright. In January 2024, it updated its usage policies to remove language explicitly prohibiting "military and warfare" applications. Before that update, military use was categorically prohibited. After it, military use was permitted for "legitimate national security" purposes. The March 2026 Pentagon deal is the operational implementation of that directional shift, concluded under pressure following Anthropic's government-wide ban.
President Trump signed an executive order on February 27, 2026, directing every federal agency to "immediately cease" use of Anthropic's technology. The Pentagon received a six-month phaseout window because Anthropic's Claude was embedded in classified systems through Palantir. The $200 million CDAO contract was effectively terminated. Anthropic lost access to all government customers, civilian and military.
The contract relies on the Pentagon's good-faith compliance and on OpenAI's ability to identify and litigate violations. Unlike technical safety restrictions built into the model itself, contractual provisions require someone to notice a violation and have standing to enforce it. The definition of "appropriate legal authorization" for surveillance activities is also determined by the executive branch, which has broad authority under national security law to authorize surveillance activities that are not publicly disclosed.
xAI agreed to allow Grok on classified networks for "any lawful use" — no restrictions, no red lines. The Pentagon got what it wanted from xAI first, on February 24, 2026, two days before the Anthropic confrontation reached its peak. xAI's agreement serves as the anchor point: it is the deal the Pentagon wanted from everyone. OpenAI's deal sits between xAI's full compliance and Anthropic's full refusal.
This is not the end. The immediate question is resolved: OpenAI signed, Anthropic was banned, xAI complied. But the underlying policy questions — what AI companies owe the public versus the government, whether contractual language is sufficient protection against AI-enabled surveillance, and who enforces those commitments — are unresolved. Anthropic's legal challenge to the government-wide ban, if it materializes, could reopen the issue through litigation. Congressional attention to military AI oversight is growing. And the "publicly available data" loophole will receive more scrutiny as the operational implications become clearer.