200+ Google and OpenAI workers petition Pentagon to ban autonomous weapons
Over 200 employees at Google and OpenAI signed an open letter backing Anthropic's position on military AI. The demands, the signatories, and what it means.
TL;DR: More than 200 employees at Google and OpenAI signed an open letter demanding that their companies publicly rule out deploying AI in fully autonomous lethal weapons and enabling mass domestic surveillance of American citizens. The signatories — many of them engineers and researchers actively building the systems in question — explicitly backed Anthropic's position after CEO Dario Amodei refused Pentagon demands and the company was subsequently banned from all federal government contracts. It is the most visible act of organized tech worker dissent on military AI since the Project Maven protests at Google in 2018.
The letter is short. That is deliberate. Petitions that run long get dismissed as screeds. This one does not make that mistake.
The core demands are three:
First, that Google and OpenAI executives publicly commit to the same two red lines Anthropic drew: no deployment of AI in fully autonomous lethal weapons systems, and no enabling of mass domestic surveillance of American citizens.
Second, that both companies refuse Pentagon contract language that would strip company-imposed usage restrictions, the exact terms the Defense Department was pushing on Anthropic in February 2026.
Third, that the two companies publicly support Anthropic's position rather than treat it as a competitor's problem. The letter argues that what happens to Anthropic sets the terms for what happens to everyone. If the Pentagon succeeds in forcing one company to remove safeguards, every future contract negotiation starts from that precedent.
The letter is addressed to Sundar Pichai and Sam Altman by name. That specificity is intentional. The signatories are not petitioning HR or asking for an internal task force. They are demanding public commitments from the two people who can actually make them.
"We are asking you to publicly affirm that no AI system you develop will be used to autonomously select and engage human targets, or to conduct mass surveillance of American citizens, without meaningful human oversight and judicial process."
That phrasing — "meaningful human oversight" and "judicial process" — mirrors the language used in the international campaign for a treaty on lethal autonomous weapons systems (LAWS), which over 30 countries and the International Committee of the Red Cross have endorsed. The workers are not inventing a new standard. They are demanding that their employers comply with one that already exists.
The number reported across outlets is "200+." That figure almost certainly undercounts.
Open letters on politically sensitive topics inside tech companies systematically undercount actual support. Many employees who agree do not sign because they fear retaliation, because they are on visa sponsorship and cannot afford employment risk, or because they work in business development and deal directly with the government clients who are the subject of the letter. The people who sign are the ones willing to attach their names publicly. The number who privately agree is typically a multiple of that.
What makes this letter notable is not just the count. It is the composition.
The Google contingent includes employees from Google DeepMind, the AI research division that produces Gemini and has separate but significant military-adjacent contracts. Signing this letter while employed at DeepMind is a meaningful act. These are not junior marketing employees. Several signatories have published in peer-reviewed venues on AI safety, reinforcement learning, and large language model alignment. They are the people building the systems the letter is talking about.
The OpenAI contingent is smaller in raw numbers but arguably more significant proportionally. OpenAI is a smaller company. Dozens of employees signing a letter that contradicts management's current direction — toward Pentagon partnership — represents a larger fraction of the employee base than the Google numbers suggest.
| Company | Reported Signatories | Notable Groups Represented |
|---|---|---|
| Google | 150+ | DeepMind, Search, Cloud, Research |
| OpenAI | 60+ | Research, Policy, Safety, Engineering |
| Total | 200+ | — |
Neither company has disputed the reported numbers.
The 2018 Project Maven controversy is the clearest precedent for what is happening now, and the signatories know it.
In 2017, Google won a contract under the Pentagon's Project Maven, a program that used machine learning to analyze drone footage and identify potential targets. When word spread internally in early 2018, thousands of Google employees signed a letter protesting the contract, and roughly a dozen resigned. The letter argued that Google should not be in the business of war, and that building weapons or surveillance tools was incompatible with the company's stated values.
Google let the Maven contract expire without renewal. It also published a set of AI Principles committing not to develop AI for weapons whose principal purpose is to cause or directly facilitate injury to people, or for surveillance that violates internationally accepted norms.
That was 2018. A lot has changed.
In 2022, Google quietly updated its Cloud terms of service to allow defense and intelligence applications. In 2023, Google Cloud signed agreements with the Israeli Defense Ministry, a contract that prompted a new wave of internal protests (the "No Tech for Apartheid" campaign and the "Project Nimbus" walkout). In 2025, Google was among four companies awarded contracts by the Pentagon for frontier AI capabilities, alongside Anthropic, OpenAI, and xAI.
The 2018 AI Principles are still on Google's website. The question this letter raises is whether they still govern actual product decisions.
"Google's AI Principles say we won't build AI for weapons. Our contract portfolio says otherwise. We are asking leadership to resolve that contradiction publicly, not internally." — Attributed to a Google signatory in press reporting
The signatories are not asking Google to exit defense work entirely. They are asking for two specific carve-outs that align with what Google's own stated principles already require. The ask is narrower than 2018. The resistance from management appears to be stronger.
OpenAI was founded in 2015 as a non-profit with an explicit mission: ensure that artificial general intelligence benefits all of humanity. The framing deliberately positioned AI safety as orthogonal to national interest, something that transcended any single government's agenda.
That framing has been revised substantially.
In January 2024, OpenAI updated its usage policies to explicitly allow military applications. The change was made without a public announcement. It was discovered by journalists comparing the old and new versions of the policy page. The previous version had prohibited "weapons development" and "military and warfare" use cases. The updated version removed those prohibitions, replacing them with narrower restrictions on specific harm categories.
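For a sense of the mechanics, here is a minimal sketch of that kind of comparison using Python's standard difflib module. The policy lines below are loose paraphrases invented for illustration, not the actual text of either version:

```python
# Sketch only: diffing two versions of a policy page to surface removals.
# These strings are paraphrases for illustration, not OpenAI's actual policy text.
import difflib

old_policy = [
    "Disallowed: activity that has high risk of physical harm",
    "Disallowed: weapons development",
    "Disallowed: military and warfare",
]
new_policy = [
    "Disallowed: activity that has high risk of physical harm",
    "Disallowed: developing or using weapons to harm others",
]

# Lines prefixed with "-" were removed; lines prefixed with "+" were added.
for line in difflib.unified_diff(old_policy, new_policy, lineterm=""):
    print(line)
```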
OpenAI's explanation at the time: the policy was overly broad and prevented beneficial partnerships, such as supporting veteran mental health programs. That is plausible as a partial explanation. It does not explain removing the weapons development restriction.
Since then, OpenAI has:

- Announced a partnership with Anduril to develop AI-powered drone systems.
- Entered government AI deployment discussions through Palantir.
- Accepted one of the Pentagon's 2025 frontier AI contracts, alongside Google, Anthropic, and xAI.
In February 2026, during the Anthropic-Pentagon standoff, Sam Altman said publicly that he shares Anthropic's red lines on autonomous weapons and surveillance. The open letter from OpenAI employees is, in part, a demand that he convert that public statement into a formal policy commitment.
The gap the letter is closing: Altman has said the right things. The company's contracts and partnerships tell a different story. The signatories want the story to match.
The specific demands in the open letter map directly onto what Anthropic refused to do.
Fully autonomous lethal weapons means systems that identify, select, and engage human targets without a human making the final decision to fire. This is distinct from AI-assisted targeting, which presents information to a human who then decides. The distinction is "human in the loop" versus "human on the loop" versus "human out of the loop." Anthropic drew the line at human out of the loop. The Pentagon wanted no line at all.
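To make the distinction concrete, here is a deliberately simplified Python sketch. Every name in it is hypothetical, invented for illustration; no real weapons-control system exposes an interface like this. The point is only where the human checkpoint sits in each mode:

```python
# Sketch only: a toy model of the oversight distinction described above.
# All names are hypothetical.
from enum import Enum, auto


class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a human must approve each engagement
    HUMAN_ON_THE_LOOP = auto()      # system acts unless a human vetoes in time
    HUMAN_OUT_OF_THE_LOOP = auto()  # system acts with no human checkpoint


def may_engage(mode: OversightMode, human_approved: bool, human_vetoed: bool) -> bool:
    """Return True only if engagement is permitted under the given mode."""
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        return human_approved        # requires an affirmative human decision
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed      # proceeds unless a human intervenes
    return True                      # out of the loop: no human input at all


# Anthropic's red line, as the article describes it, is the third branch:
# any configuration where may_engage() can return True with no human input.
```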
Mass domestic surveillance means bulk monitoring of American citizens without individualized suspicion or judicial authorization. Not foreign intelligence collection. Not targeted investigation of specific individuals with court approval. Mass surveillance: the kind that captures data on everyone and analyzes it later to identify suspects.
These two restrictions align with:

- The proposed international treaty on lethal autonomous weapons systems, which more than 30 countries and the International Committee of the Red Cross have endorsed.
- Google's own 2018 AI Principles, which rule out AI for weapons and for surveillance that violates internationally accepted norms.
- OpenAI's founding mission of ensuring that artificial general intelligence benefits all of humanity.
- Longstanding requirements of individualized suspicion and judicial authorization for domestic surveillance.
The workers signing this letter are not asking their employers to adopt fringe positions. They are asking their employers to comply with international consensus standards and their own founding principles.
The technical argument is equally important. Current large language models and vision systems make errors. They hallucinate. They misidentify. They fail in distribution-shifted environments. The rate of error acceptable in a customer service chatbot is not the rate of error acceptable in a system that decides whether to kill a person. The workers building these models know this better than anyone outside the room.
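A back-of-envelope illustration of that argument, using made-up numbers rather than any measured error rate:

```python
# Hypothetical numbers for illustration only; not measured error rates.
error_rate = 0.001           # assume a 0.1% misidentification rate in testing
decisions_per_day = 10_000   # assume this many automated targeting decisions daily

expected_errors = error_rate * decisions_per_day
print(f"Expected misidentifications per day: {expected_errors:.0f}")  # 10

# In a chatbot, each error is a bad answer. In an autonomous weapons system,
# each error is potentially a wrongful engagement. Distribution shift in the
# field typically pushes the real rate higher than the tested one.
```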
As of publication, neither Google nor OpenAI had issued a public statement directly addressing the letter.
That non-response is its own response.
In 2018, after the Project Maven letter, Google CEO Sundar Pichai met with employees and published the AI Principles within weeks. The company did not immediately exit the contract, but leadership acknowledged the concerns publicly and committed to a framework.
This time, there has been silence at the executive level. That silence may reflect a calculation: engaging with the letter legitimizes it. Or it may reflect the changed political environment. In 2018, tech companies were not under sustained pressure from the executive branch to cooperate with military AI development. In 2026, the context is different. Publicly siding with Anthropic's position, even implicitly, carries political cost in a Washington that has made clear it views AI safety restrictions as obstruction.
There is also the competitive dynamic. Anthropic is a direct competitor to both companies. Supporting Anthropic's position is good for the principle but potentially good for the competitor. That tension is uncomfortable.
What leadership has not done: disciplined any signatories, issued a rebuttal, or updated company policy in response. The letter is, for now, unacknowledged.
Tech worker activism has a complicated record.
The wins are real. The 2018 Project Maven protest resulted in Google letting the contract lapse. The November 2018 Google walkout over harassment policy resulted in formal anti-retaliation protections and changes to arbitration agreements. Amazon employees' protests over Rekognition (facial recognition sold to police departments) contributed to Amazon placing a moratorium on police use of the technology in 2020.
The losses are also real. The "No Tech for Apartheid" campaign at Google did not prevent the Project Nimbus contract, and several organizers allege they were subsequently fired in actions they are challenging as retaliation. Amazon's Rekognition moratorium was extended indefinitely but never became permanent policy, and other vendors continued selling facial recognition to governments despite employee objections.
The pattern: activism works best when it coincides with business risk. Project Maven became a PR problem. The harassment walkout became a liability issue. When the business interest runs the other way, when government contracts are profitable and politically necessary, worker pressure alone rarely changes outcomes.
The current petition faces that headwind. Defense contracts are increasingly central to big tech strategy. Google Cloud's government revenue grew substantially over the past three years. OpenAI's path to the revenue it needs to justify its $300+ billion valuation runs substantially through government and enterprise sales. The employees signing this letter are asking their employers to accept contract restrictions that limit a growth market.
That does not mean the letter will have no effect. Internal culture matters. The people building AI systems who believe those systems should not be used to kill people autonomously will make different decisions at the margin — in conversations with product teams, in what they are willing to implement, in what they flag to safety teams — than people who do not believe that. The letter is also a signal to regulators, to international treaty negotiations, and to the public that the people closest to this technology do not consider it safe enough for autonomous lethal use.
Three things this letter can do that an op-ed cannot.
First, it creates internal record. The signatories have now formally stated that they consider autonomous lethal weapons deployment to be contrary to their professional judgment. If either company subsequently deploys AI in autonomous weapons and something goes catastrophically wrong, this letter becomes part of the documented warning that was ignored. That is not a small thing in terms of corporate liability and the eventual public accounting of what was known and when.
Second, it gives executives political cover. Pichai and Altman can point to employee pressure as partial justification for maintaining usage restrictions in contract negotiations. "My own researchers won't work on this" is a negotiating position. Silence from employees removes that cover.
Third, it makes the position legible. The Pentagon's argument is that AI safety restrictions are one company's idiosyncratic preference. The letter makes visible that the restriction is an industry-wide professional consensus, not Dario Amodei's personal opinion. That changes the political framing from "Anthropic being difficult" to "the industry has a view."
What it probably cannot do: override economic incentives at the board and executive level. The defense market is too large and too politically important in the current environment for an open letter to shift strategy directly.
The more likely path to change is regulatory. The EU AI Act covers high-risk AI systems, though it explicitly excludes military uses from its scope; international treaty negotiations on lethal autonomous weapons are ongoing at the UN Convention on Certain Conventional Weapons. If binding international rules emerge, they will constrain the signatories' employers regardless of internal culture. The open letter is, in part, a political act directed at those arenas, not just at Sundar Pichai.
The workers signing this letter are not naive. They know how organizations work. They know the limits of internal advocacy. They are creating a public record of where the field stands, on the record, with names attached, before autonomous weapons become operational. That record will matter later, even if it does not matter immediately.
The letter asks Google's Sundar Pichai and OpenAI's Sam Altman to publicly commit to two restrictions: no AI deployment in fully autonomous lethal weapons systems (systems that select and engage targets without human oversight), and no enabling of mass domestic surveillance of American citizens without judicial authorization. It also asks both leaders to publicly support Anthropic's position in its standoff with the Pentagon.
More than 200 employees across Google and OpenAI signed as of initial reporting. The Google contingent includes employees from Google DeepMind and other AI-focused divisions. The OpenAI contingent is smaller in raw numbers but significant proportionally given the company's size. The signatories include engineers, researchers, and policy staff — people with direct involvement in building the relevant systems.
Anthropic CEO Dario Amodei refused Pentagon demands to allow Claude to be used for autonomous weapons and mass surveillance. The Pentagon responded by threatening contract termination and other consequences. President Trump subsequently ordered all federal agencies to stop using Anthropic's technology. The workers signed the letter arguing that Anthropic's position is correct and that Google and OpenAI should adopt the same red lines publicly, rather than quietly accepting different terms.
Project Maven was a Pentagon program, contracted to Google in 2017, under which the company provided machine learning technology to analyze drone footage for targeting purposes. After thousands of employees signed a protest letter in 2018 and roughly a dozen resigned, Google let the contract expire and published AI Principles committing not to build AI for weapons whose principal purpose is to cause harm. The current letter invokes that history, noting that the 2018 principles are still on Google's website but are no longer clearly reflected in its contracts.
OpenAI was founded as a non-profit in 2015. It converted to a capped-profit structure in 2019 and completed a full commercial conversion in 2025. In January 2024, OpenAI updated its usage policies to remove explicit prohibitions on military and weapons use cases, opening the door to defense contracts. The company subsequently announced a partnership with Anduril to develop AI-powered drone systems and entered government AI deployment discussions through Palantir. The open letter from employees is partly a demand that CEO Sam Altman's public statements supporting Anthropic's red lines be formalized as actual policy commitments.