TL;DR: When Anthropic refused the Pentagon's demand for unrestricted military AI use and OpenAI signed on the dotted line, San Franciscans did what San Franciscans do: they grabbed chalk and took to the streets. Messages praising Anthropic and questioning OpenAI appeared on sidewalks outside both companies' HQs. The city power-washed them away. They came back overnight. This is what an AI culture war looks like when it leaves the group chats.
What you will learn
- The chalk messages and what they actually said
- Why Anthropic's HQ got praised and OpenAI's got circled in red
- The Pentagon backstory: what each company did and why it matters
- San Francisco as the epicenter of the AI culture war
- Social media reaction and why this became a meme
- What sidewalk chalk tells us about AI ethics sentiment
- The longer story: when values become brand identity
- FAQ
The chalk messages
On the morning of Friday, February 28, 2026, people walking past Anthropic's downtown San Francisco headquarters at Howard and First streets found something unusual on the sidewalk. Written in chalk, in large careful letters: messages praising the company's "courage." By afternoon, the city had washed them away.
By evening, new messages had appeared outside OpenAI's Mission Bay headquarters on Third Street. This time the tone was different.
"OpenAI Should Lead, Orwell Warned Us"
"Show the contract"
"Take a stand for civil liberty"
"Don't help the government spy on Americans"
"Do no evil"
"History is watching"
Someone had drawn a red chalk circle around the building. City workers came. City workers left. New chalk appeared overnight, and the cycle repeated through the next day.
The whole thing lasted about 36 hours. No arrests. No confrontations. Just chalk, water, and repetition. In a city that has hosted anti-Vietnam War protests, union marches, and the original Summer of Love, sidewalk chalk messages about AI ethics will not make the history books. But they will make the culture section, and sometimes that matters just as much.
Why Anthropic got praised and OpenAI got circled in red
The chalk did not appear from nowhere. It was a direct response to a specific sequence of events that played out over roughly 72 hours at the end of February 2026.
Thursday, February 27. Anthropic CEO Dario Amodei published a formal refusal to comply with Pentagon demands. Defense Secretary Pete Hegseth had given Anthropic until 5:01 PM Friday to remove its usage restrictions — specifically the prohibitions on autonomous weapons and mass domestic surveillance — or face a government ban. Amodei's response: "We cannot in good conscience accede to their request."
Friday, February 28, 5:01 PM. The deadline passed. Anthropic held its position. President Trump signed an executive order banning all federal agencies from using Anthropic's technology.
Friday, February 28, evening. OpenAI CEO Sam Altman secured a Pentagon deal. He stated the company had agreed on red lines against surveillance and autonomous weapons — the same red lines Anthropic had just been banned for holding — but critics were quick to note the difference between promising red lines in a press statement and having them enforced in a contract.
To people watching this closely, the sequence was damning. Anthropic was punished for saying no. OpenAI negotiated its way to a yes. The chalk outside Anthropic's office was a standing ovation. The chalk outside OpenAI's office was a question.
The Pentagon backstory
To understand why anyone in San Francisco cared enough to go buy chalk, you need to understand what the Pentagon actually demanded.
The dispute started with a January 2026 memo from Defense Secretary Pete Hegseth calling for an "AI-first warfighting force." The memo stated that AI models should be used for military purposes "free from usage policy constraints" set by AI companies. In plain English: the Pentagon wanted to use AI with no guardrails.
Anthropic's existing contract — worth up to $200 million, signed in July 2025 — included two hard restrictions:
- No fully autonomous lethal weapons: systems that identify, select, and engage targets without a human in the decision loop.
- No mass domestic surveillance: bulk monitoring of American citizens without individualized suspicion.
Hegseth set a formal deadline: comply by 5:01 PM on Friday, February 28, or be designated a "supply chain risk" and banned from government systems. The time specificity was itself a provocation. Not 5 PM. Not end of business. 5:01. It was designed to feel like a countdown.
Anthropic refused.
Once the deadline passed, Trump escalated the situation further, ordering every federal agency to cease using Anthropic immediately and giving the Pentagon six months to phase out Claude from classified military networks.
Then Sam Altman made his call. By Friday evening, OpenAI had a Pentagon deal. The contrast was immediate and stark. Anthropic took the punishment. OpenAI took the contract.
That contrast is what the chalk was about.
San Francisco as the epicenter of the AI culture war
San Francisco has always been a city that turns ideas into theater. The tech industry's cultural contradictions — its rhetoric of democratization alongside its consolidation of power, its claims of benevolence alongside its military contracts — have been playing out on SF streets for a decade.
The AI boom accelerated all of it. Mission Bay, where OpenAI's headquarters sits, was not long ago a district of warehouses and parking lots. Now it is the physical address of one of the most powerful and contested technology companies in the world. The same neighborhood where workers once organized for labor rights now hosts companies whose foundational question is whether their products should be allowed to kill people autonomously.
That tension was always going to leak onto the sidewalk eventually.
San Francisco's tech workforce skews toward people with strong opinions about AI safety. Anthropic was founded in 2021 by former OpenAI researchers — including Dario Amodei and his sister Daniela Amodei — who left specifically because they believed OpenAI was moving too fast without adequate safety commitments. Many of those researchers still live in the city. So do thousands of people who follow this world closely.
The AI safety community is not fringe in San Francisco. It is a significant cultural constituency. When Anthropic's CEO said the company could not "in good conscience" cross two specific lines, and it lost hundreds of millions of dollars as a result, a sizable portion of the city experienced that as a moment of genuine moral clarity in an industry not known for it.
The chalk was a small act of recognition. You did the thing you said you would do. We noticed.
Social media reaction and why this became a meme
The chalk story spread fast, but not through traditional news channels. It moved through X, Bluesky, and AI-adjacent Discord servers first.
The initial virality was visual. Someone photographed the red circle around OpenAI's headquarters. That image, combined with the phrase "God loves Anthropic" (which appeared in some of the chalk messages at Anthropic's Howard Street location), created an immediately shareable contrast. The photos looked like protest art. They also looked like something you would create if you were deliberately trying to go viral.
Whether the chalking was organic or coordinated is unclear. Mission Local, which first reported the story, described the messages as appearing overnight and reappearing after city cleanups. That pattern suggests either a group of people working in shifts, or multiple independent individuals who had the same idea.
The Orwell angle landed. "OpenAI Should Lead, Orwell Warned Us" was the line that got the most attention online. It is blunt to the point of being slightly clunky, but it worked as a social object because it condenses the entire concern into a single phrase. George Orwell wrote about surveillance states and the language used to justify them. The message was not subtle. It did not need to be.
"Do no evil" carried its own weight. That phrasing is historically Google's — or rather, it is the famous version of what was actually "Don't be evil," Google's original motto. Using it outside OpenAI's headquarters implies a comparison that does not need to be spelled out. You know what Google's motto became, and you know how that went.
Sam Altman's trust comment became the hook. In the same period that Altman was negotiating the Pentagon deal, he made a comment to CNBC that he "mostly trusted" Anthropic as a company. The word "mostly" became a meme instantly. Social media users noted that you do not qualify trust about safety-critical things with "mostly" unless you are hedging for a reason. The chalk protesters had a version of this in "Show the contract" — a demand that OpenAI make its Pentagon agreement public so the red lines could be verified, not just announced.
What sidewalk chalk tells us about AI ethics sentiment
There is a tendency, when writing about AI policy disputes, to stay in the register of contracts, congressional testimony, and think tank white papers. The chalk story is a useful corrective because it shows what happens when those disputes reach people who care but are not policy professionals.
People are paying attention. The level of detail in the chalk messages — "Show the contract," the explicit reference to domestic surveillance — suggests people who are tracking the specifics, not just expressing vague unease. This is not the usual "AI is scary" protest sentiment. This is people who know the terms of Anthropic's usage policy and have opinions about it.
The cultural framing has shifted. Two years ago, the AI debate in San Francisco was primarily about jobs and displacement. The chalk protest does not mention jobs. It mentions surveillance, civil liberty, and Orwell. The public conversation has moved to questions about power and accountability.
Companies are being treated as moral actors. The protesters drew a distinction between Anthropic and OpenAI that is not about product quality or pricing. It is about what each company was willing to do when it cost them something. That is a value judgment, not a consumer preference. Treating a tech company as a moral actor — praising it or condemning it on ethical grounds — used to be niche. It is less niche now.
The impermanence is the point. Sidewalk chalk washes away. The protesters knew this and kept coming back. There is something in that repetition that is worth noting. The gesture is not meant to be permanent. It is meant to show that someone cares enough to do it again. That is different from signing a petition or posting a tweet. It requires you to be somewhere specific, at a specific time, doing something physical in the rain-adjacent climate of late-February San Francisco.
The city washed it away. It came back. In a 36-hour window during which one AI company took a principled stand and another signed a government contract, someone made sure those two things were not invisible.
The longer story: when values become brand identity
The chalk story is amusing. It is also the visible edge of something larger.
Anthropic has spent five years building a brand identity that is inseparable from its stated safety commitments. Its Responsible Scaling Policy, its Constitutional AI approach, its hiring of AI safety researchers — all of it adds up to a company that has made "we will not cross certain lines" central to its identity. When the Pentagon called that bluff and Anthropic did not blink, it turned a brand claim into a demonstrated fact.
That has commercial implications. Enterprise customers in healthcare, finance, and legal services increasingly care about the governance posture of the AI companies they work with. Being the company that proved it would accept a government ban rather than enable mass surveillance is a selling point in those markets. It is also a risk factor in others. Companies that need to work with federal agencies will now weigh whether using Anthropic's products creates complications.
It also has cultural implications. The AI industry's reputation in San Francisco has been complicated at best. The original wave of tech utopianism curdled for a lot of people through the 2010s. AI's rapid rise has produced both genuine enthusiasm and genuine fear in the city that hosts more of it than anywhere else. The chalk protests are a small data point, but they suggest that at least some people in that city want to believe that safety commitments are real and not just marketing.
OpenAI's situation is more complex than the chalk story makes it look. Altman stated publicly that OpenAI's deal with the Pentagon included explicit red lines against domestic surveillance and autonomous weapons — the same red lines Anthropic drew. The difference is that Anthropic's red lines were tested under extreme pressure and held. OpenAI's were announced in a favorable negotiating environment. Whether that distinction matters will depend on what the contract actually says, which is why "Show the contract" was the sharpest thing written on those sidewalks.
The deeper question is whether AI ethics can survive contact with the money that now flows into this industry. Anthropic raised $7.3 billion in 2024. OpenAI closed a $40 billion round in early 2025. At those valuations, the financial incentives to accommodate powerful clients are enormous. The Pentagon dispute showed one company saying no at real cost. The chalk showed people noticing.
That might be all it was. A few days of sidewalk art that the rain will eventually take care of permanently. Or it might be an early indicator that the AI industry's ethical commitments are going to be tested publicly, repeatedly, and that the outcomes will matter to people beyond the researchers, ethicists, and policy professionals who normally care about this stuff.
Either way, someone bought a lot of chalk.
Frequently asked questions
What did the chalk messages actually say?
Multiple messages appeared at two locations. At Anthropic's downtown headquarters on Howard and First streets, messages focused on praising the company's "courage" in refusing the Pentagon's demands. At OpenAI's Mission Bay headquarters on Third Street, messages included "OpenAI Should Lead, Orwell Warned Us," "Show the contract," "Take a stand for civil liberty," "Don't help the government spy on Americans," "Do no evil," and "History is watching." Someone also drew a red chalk circle around OpenAI's building.
Why did the city keep washing the messages away?
San Francisco Public Works routinely removes chalk messages from sidewalks outside private businesses, particularly when they are interpreted as targeted protests. The cleanup is standard rather than politically motivated — the same rules that apply to any chalking outside a commercial building.
Did Anthropic or OpenAI respond to the chalk protests?
Neither company issued a public statement specifically about the chalk messages. Anthropic had already issued its public refusal of the Pentagon's demands earlier in the week. OpenAI's Sam Altman had announced the Pentagon deal and stated publicly that it included red lines against surveillance and autonomous weapons.
What is the "Show the contract" message about?
It is a demand for transparency. Altman stated that OpenAI's Pentagon deal included explicit prohibitions on mass domestic surveillance and autonomous weapons — the same lines Anthropic refused to cross. Critics argued that stating red lines in a press announcement is different from having them enforced contractually, and that the public should be able to verify what OpenAI agreed to. As of the protests, the contract terms had not been made public.
Why did this become a meme so quickly?
Several elements combined. The visual of a red circle around OpenAI's building was immediately shareable. The phrase "God loves Anthropic" was absurd enough to be funny. The contrast with "Do no evil" — a reference to Google's original motto — connected to a widely understood cultural narrative about tech companies abandoning their founding principles. And the Altman "mostly trusted" clip was already circulating when the chalk photos appeared, so the two things reinforced each other.