TL;DR: The U.S. Senate has officially greenlighted three commercial AI chatbots for staff use: OpenAI's ChatGPT Enterprise, Google's Gemini Chat, and Microsoft's Copilot Chat. The authorization came via a one-page memo from the Senate Sergeant at Arms' Chief Information Officer, first reported by The New York Times on March 10, 2026. Notably absent from the approved list: Anthropic's Claude and Elon Musk's Grok. The omission of Claude is particularly striking given that the House of Representatives already permits staff to use it alongside the same three approved tools. The timing — weeks after the Trump administration blacklisted Anthropic across the executive branch — suggests the Senate's list may be as much about politics as it is about technology.
What you will learn
- What the Senate memo actually says
- Which tools were approved and how they differ
- What staff are and are not allowed to do
- Why Claude was left off the list
- Why Grok is also missing — and why that is different
- The House vs. Senate split on AI policy
- The Pentagon shadow over Capitol Hill
- What this means for enterprise AI adoption in government
- How other governments are approaching AI at work
- Frequently asked questions
What the Senate memo says
The document at the center of this story is a single-page memo circulated internally by the Senate Sergeant at Arms' information technology office. 404 Media published the full memo after the story broke, and it is worth reading carefully because the details matter.
The memo frames the approvals explicitly as an upgrade from an earlier, more restrictive regime. As FedScoop noted, "the new approvals are less restrictive on the type of data that can be ingested, opening the door to more widespread use." That is a meaningful shift. Previous iterations of Senate AI guidance required staff to treat commercial AI tools as inherently untrusted endpoints — meaning any data shared with them had to be considered fully public. The new memo changes that calculus, at least for the three approved vendors.
Three tools are cleared:
- Microsoft Copilot Chat — available immediately, at no cost, for all Senate employees. Copilot runs inside the Senate's existing Microsoft 365 Government environment and is covered by the same security and compliance controls as other Microsoft services the Senate already uses.
- Google Gemini Chat — approved, with per-employee licenses available from the SAA (Senate Sergeant at Arms). Licensing details were to follow within 30 days of the memo's circulation.
- OpenAI ChatGPT Enterprise — approved, with per-employee licenses also available from the SAA. Each employee receives one free license for either Gemini or ChatGPT Enterprise — not both — allocated through the SAA's office.
The memo was signed by the SAA's Chief Information Officer, not by a senator or committee chair. That procedural detail matters: this is an administrative technology policy, not a legislative act. It can be updated, expanded, or reversed by the same office without a floor vote.
Which tools were approved and how they differ
The three approved tools are not interchangeable. Each brings different security architectures, pricing structures, and underlying model capabilities. Understanding the differences explains why the Senate chose to offer all three rather than standardizing on one.
Microsoft Copilot Chat is the path-of-least-resistance approval. The Senate already uses Microsoft 365 for email, documents, and collaboration. Copilot integrates directly into that environment without requiring new vendor relationships or data-sharing agreements. The memo notes explicitly that "Copilot Chat does not have access to any Senate data unless that information is explicitly shared within a prompt. Copilot does not search internal drives, shared folders, email, Teams chats, or any other Senate resources on its own." It operates within Microsoft's secure government cloud and was already meeting federal cybersecurity requirements before this memo was issued.
Google Gemini Chat runs on Google's cloud infrastructure and provides access to Google's Gemini model family. For Senate staff who already use Google Workspace tools, Gemini integrates naturally into their existing workflows. Google's enterprise agreements include provisions around data retention, model training exclusions, and audit logging that meet the Senate's baseline security requirements.
OpenAI ChatGPT Enterprise is the consumer-familiar option with the highest brand recognition. The Enterprise tier — distinct from the free ChatGPT and ChatGPT Plus products — includes data isolation, no model training on user inputs, and team management controls. It is the version OpenAI markets specifically to regulated industries and government-adjacent organizations.
The one-license-per-employee structure for Gemini and ChatGPT is a deliberate choice. Rather than leaving offices to procure tools on their own or relying on individual expense reimbursements, the SAA is managing the licenses centrally. This gives the office visibility into adoption rates and allows for security auditing at the account level.
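To make the either/or licensing concrete, here is a minimal sketch of a centrally managed license registry: one free license per employee, for either Gemini Chat or ChatGPT Enterprise, with an append-only record for auditing. The names and data structures are assumptions for illustration, not the SAA's actual provisioning system.

```python
# Hypothetical sketch of SAA-style central license allocation.
# One free license per employee, for either Gemini Chat or ChatGPT Enterprise.
from dataclasses import dataclass, field
from datetime import datetime, timezone

LICENSED_TOOLS = {"gemini_chat", "chatgpt_enterprise"}  # Copilot needs no separate license

@dataclass
class LicenseRegistry:
    allocations: dict = field(default_factory=dict)  # employee_id -> tool
    audit_log: list = field(default_factory=list)    # append-only trail for account-level auditing

    def allocate(self, employee_id: str, tool: str) -> None:
        if tool not in LICENSED_TOOLS:
            raise ValueError(f"{tool!r} is not an SAA-licensed tool")
        if employee_id in self.allocations:
            # One license per employee: Gemini or ChatGPT Enterprise, not both.
            raise ValueError(f"{employee_id} already holds {self.allocations[employee_id]!r}")
        self.allocations[employee_id] = tool
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), employee_id, tool))

registry = LicenseRegistry()
registry.allocate("staffer-001", "chatgpt_enterprise")  # succeeds
# registry.allocate("staffer-001", "gemini_chat")       # would raise: not both
```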
What staff can and cannot do
The memo is explicit about the permitted use cases and the guardrails. Approved uses center on routine, non-sensitive administrative work:
- Drafting and editing memos, briefings, and internal reports
- Summarizing lengthy documents, hearings, and reports
- Preparing talking points and background materials for members
- Research and analysis drawing on publicly available information
- Note-taking and meeting preparation
The restrictions are equally specific. Staff are prohibited from entering:
- Personally identifiable information (PII) about constituents, staff, or third parties
- Classified or controlled unclassified information
- Physical security details about Senate facilities, members, or staff
- Sensitive legislative strategy or privileged communications
The memo also requires that all AI-generated output be reviewed and validated by a human before use in any official capacity. This is a standard guardrail in government AI policies, designed to prevent AI hallucinations from becoming embedded in official documents or communications.
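The memo states these guardrails as policy rather than as technical controls, but a minimal pre-submission screen shows what automated enforcement of the most mechanically detectable restrictions could look like. The patterns and function names below are illustrative assumptions, not part of the Senate's tooling; regex checks cannot catch classified material or legislative strategy, which is one reason the human-review requirement still applies.

```python
# Illustrative pre-submission screen for obvious PII before a prompt is sent.
# This only catches mechanically detectable patterns; judgment calls remain with staff.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b(?:\+1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_restricted_data(prompt: str) -> list[str]:
    """Return the names of any PII patterns detected in the prompt text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize yesterday's hearing transcript into a one-page briefing memo."
if flags := flag_restricted_data(prompt):
    print(f"Blocked: prompt appears to contain {', '.join(flags)}")
else:
    print("No obvious PII detected; human review of the output is still required.")
```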
What is notably absent from the restrictions: any prohibition on using these tools for constituent correspondence or legislative drafting. The House's September 2024 AI policy requires manager approval for those more sensitive use cases. The Senate memo does not appear to create that same two-tier approval structure, though the underlying spirit of the guidance — use these for routine tasks, apply judgment for anything sensitive — is similar.
Why Claude was left off the list
The question everyone is asking is the one the memo does not answer: why is Claude not on the list?
Anthropic's Claude is, by most technical benchmarks, a peer to or ahead of the models offered by the three approved vendors. Claude 3.5 Sonnet and Claude 3.5 Haiku have topped coding and reasoning benchmarks. Claude's constitutional AI approach is specifically designed to make the model more careful about harmful outputs — a feature that ought to make it more appealing to government users, not less.
The most obvious explanation is political. On February 27, 2026, President Trump signed an executive order directing every federal agency to cease all use of Anthropic's technology. The State Department confirmed it was removing Claude. The Department of Health and Human Services and the Treasury Department followed. The Pentagon formalized Anthropic's designation as a supply chain risk under 10 U.S.C. § 3252. Defense tech companies that had been using Claude in their workflows began dropping the tool to avoid compliance exposure.
The Senate sits in an independent legislative branch and is technically insulated from executive orders that bind executive agencies. The Sergeant at Arms is not subject to the same directives that govern the State Department or the Pentagon. There is no legal reason the Senate could not have put Claude on the approved list.
But there is a political reason. The Republican-led Senate is unlikely to take a stance that reads as counter-programming to a Trump executive order, even in a domain — technology procurement — where legislative independence is well established. The absence of Claude from the list is almost certainly an act of political discretion rather than a technical finding.
It is also worth noting what the memo does not say: it does not say Claude is banned, unsafe, or prohibited. It simply does not include it. That is a softer posture than the executive branch's explicit rejection — one that leaves the door open for future addition without requiring anyone to admit the initial omission was politically motivated.
The contrast with the House is telling. As the POPVOX Foundation's tracker documents, the House permits staff to use Claude alongside ChatGPT, Gemini, and Copilot. That policy predates the February 2026 executive order, which means the House list was set when Anthropic was still in good standing with the federal government. If the Senate had been evaluating tools on purely technical grounds, Claude's House approval would provide a precedent for inclusion.
Why Grok is also missing — and why that is different
Grok's absence from the approved list is less surprising and more straightforward than Claude's — but it still tells a story.
xAI's Grok, owned by Elon Musk, agreed in February 2026 to provide the Pentagon with unrestricted access for "any lawful use" — the exact posture the Pentagon demanded from Anthropic and that Anthropic refused. That made Grok the Trump administration's preferred AI vendor in the executive branch standoff. But the Senate, regardless of party alignment, has an institutional interest in maintaining distance from any technology that is too closely identified with a single political figure.
There is also a technical and maturity argument. Grok's enterprise offering is newer and less battle-tested in regulated, compliance-heavy environments than ChatGPT Enterprise, Gemini's Workspace integration, or Microsoft's government cloud products. The three approved tools have all undergone FedRAMP authorization processes or equivalent security reviews. Grok has not established the same compliance posture for government deployments.
The absence of Grok may also reflect a different concern: brand association. Elon Musk's role in the Trump administration — including his leadership of the Department of Government Efficiency — makes Grok politically complicated in a way that none of the other tools are. Even senators who agree with the administration on most policy matters have an interest in ensuring that their staff's AI tools are not seen as instruments of executive branch influence operations.
The House vs. Senate split on AI policy
The divergence between House and Senate AI policies is worth examining in its own right, because it reveals that there is no unified congressional approach to AI adoption — even within the same party caucus.
The House adopted a formal AI usage policy in September 2024, creating a structured framework with tiered approvals. Basic tasks can be done with AI without special permission. Higher-stakes uses — drafting constituent correspondence, preparing member talking points, generating anything that will leave the office — require manager sign-off. The policy was written to be technology-agnostic and has been updated as new tools have been added to the HouseNet approved list.
The Senate's memo is a different kind of document. It is narrower in scope, focused on specific tool approvals rather than establishing a comprehensive framework for AI use. It does not create the same tiered approval structure. And critically, its approved list is a subset of the House's list — three tools versus four.
That divergence may close over time. The Senate memo signals that the SAA's office is actively evaluating AI tools, not treating this as a static list. The 30-day window for additional Gemini and ChatGPT licensing details suggests the office is building operational infrastructure for ongoing AI governance, not just issuing a one-time permission slip.
The institutional asymmetry also reflects different risk tolerances. The House, as a larger body with 435 members and significantly larger staff, has more surface area for AI adoption and a stronger operational incentive to move quickly. The Senate, with 100 members and a more deliberate culture, tends toward more cautious institutional change.
The Pentagon shadow over Capitol Hill
It would be a mistake to analyze the Senate's AI approval list without accounting for the broader context: the executive branch's ongoing campaign to reshape which AI companies are acceptable partners for the U.S. government.
The Trump administration's actions against Anthropic — the executive order, the supply chain designation, the agency-by-agency removal of Claude — represent an attempt to use federal procurement power to influence how AI companies structure their products and safety policies. The implicit message: AI companies that comply with government demands for unrestricted access get contracts and approvals; those that resist get blacklisted.
As TechRadar reported, the State Department's move to drop Claude and the Senate's approval of three other tools are happening simultaneously — not coincidentally. The government's AI vendor landscape is being reshaped by a combination of technical procurement decisions and political pressure.
For the Senate, this creates a genuine governance challenge. The legislative branch exists precisely to provide a check on executive overreach. But individual senators and their staff have practical needs — they want AI tools that work, are secure, and do not create political problems. The memo's approved list threads that needle by authorizing the three tools that have either political neutrality (Microsoft, Google) or explicit executive branch favor (OpenAI, which struck its own Pentagon deal after Anthropic's fell apart), while quietly leaving out the two that are politically complicated.
Anthropic, for its part, is not standing still. The company has sued the Pentagon over the supply chain risk designation, arguing that the statute was never meant to be used against a domestic AI company in a policy dispute. If Anthropic prevails in court — which could take years — the legal basis for the current Claude exclusion from executive branch tools evaporates. Whether that would prompt the Senate to add Claude to its approved list is another question.
What this means for enterprise AI adoption in government
The Senate's approval, despite its politically legible omissions, is a meaningful milestone for enterprise AI adoption in the public sector. Until recently, the dominant posture in government was prohibition by default: commercial AI tools were not approved, staff used them anyway (often on personal devices to avoid IT oversight), and agencies had no visibility into how sensitive information was being handled.
The Senate memo flips that dynamic. By creating a formal approved list with enterprise accounts that the SAA manages centrally, the Senate gains audit visibility into AI usage, data isolation guarantees from the vendors, and a policy framework that staff can actually follow. That is a more mature approach than the "don't ask, don't tell" regime that characterized government AI adoption for the first two years after ChatGPT's launch.
It also creates a template for other legislative bodies and state governments. Seeing the Senate provide enterprise accounts with centrally managed licenses, security-reviewed tools, and explicit use-case guidance gives other institutions a model to follow. Several state legislatures have been watching federal AI adoption carefully before committing to their own policies. The Senate memo gives them a reference point.
The enterprise tier requirement is particularly important. The Senate did not approve personal ChatGPT accounts. It approved ChatGPT Enterprise — the version with data isolation, no training on user inputs, and team admin controls. That distinction matters for data governance. It means a senator's staff cannot simply use their personal OpenAI credentials for official work; they need accounts provisioned through the SAA. That creates traceability, which is a prerequisite for accountability.
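A short sketch of what that traceability looks like in practice: official use goes only through accounts the SAA has provisioned, so a personal consumer-tier account fails the check. The record format and tier labels here are assumptions for illustration, not a description of Senate systems.

```python
# Illustrative account-provenance check: only SAA-provisioned enterprise accounts
# carry the data-isolation and audit guarantees the memo relies on.
SAA_PROVISIONED = {
    "staffer-001": {"tool": "chatgpt_enterprise", "tier": "enterprise"},
}

def is_approved_account(employee_id: str) -> bool:
    """Acceptable for official work only if the SAA provisioned an enterprise account."""
    record = SAA_PROVISIONED.get(employee_id)
    return record is not None and record["tier"] == "enterprise"

print(is_approved_account("staffer-001"))  # True: provisioned and auditable
print(is_approved_account("staffer-002"))  # False: e.g., a personal ChatGPT Plus account
```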
How other governments are approaching AI at work
The Senate's move fits into a global pattern of government institutions cautiously opening the door to commercial AI tools while building guardrails around sensitive use cases.
The United Kingdom's Cabinet Office issued AI usage guidance for civil servants in 2024 that follows a similar logic: approved tools for routine tasks, stricter requirements for anything touching sensitive information. The UK's approved tool list skews toward Microsoft's Copilot and Google's Workspace AI, consistent with existing government cloud contracts.
The European Union's institutions have been more cautious. The European Commission and European Parliament have generally restricted staff from using commercial AI tools on official networks, citing concerns about data sovereignty and the risk that EU legislative deliberations could end up in training data for U.S.-based AI companies. The EU AI Act's requirements around high-risk systems add another layer of compliance complexity.
Canada's Treasury Board Secretariat has issued AI directives that prioritize government-developed or government-contracted AI systems over commercial tools, reflecting a different philosophical approach to digital sovereignty.
The U.S. Senate's approach — choose from approved commercial vendors, manage licenses centrally, set explicit data guardrails — is closer to the UK model than the EU model. It reflects a pragmatic view that the productivity benefits of commercial AI are real and that the right response to data security concerns is enterprise-grade controls rather than prohibition.
What none of these governments have resolved is the deeper question the Senate memo sidesteps: how to evaluate AI tools on their merits when procurement decisions are intertwined with geopolitics, domestic politics, and the evolving legal landscape around AI safety. The Senate approved the tools it approved. The tools it did not approve will keep being evaluated — at least officially. Whether Claude eventually makes the Senate's list may depend less on its capabilities than on how Anthropic's lawsuit against the Pentagon resolves.
Frequently asked questions
What exactly did the Senate approve?
The Senate Sergeant at Arms' Chief Information Officer issued a memo approving three AI chatbots for official staff use: Microsoft Copilot Chat, Google Gemini Chat, and OpenAI ChatGPT Enterprise. Copilot is available immediately at no cost through the Senate's existing Microsoft 365 environment. Each employee gets one free license for either Gemini or ChatGPT Enterprise through the SAA's office.
When did this happen?
The memo was circulated internally in early March 2026. The New York Times first reported on it on March 10, 2026. 404 Media published the full text of the memo shortly thereafter.
Why was Claude not approved?
The memo does not give a reason. Claude is approved for use in the House but not the Senate. The most likely explanation is political: the Trump administration's executive order and supply chain risk designation against Anthropic have made Claude politically radioactive in the Republican-led Senate, even though the Senate is technically insulated from executive branch procurement directives. No technical finding was cited.
Why was Grok not approved?
Grok was also absent from the approved list. Unlike Claude, Grok has the current administration's political favor — xAI gave the Pentagon unrestricted access to Grok. But Grok has not completed the government cloud security certifications (FedRAMP or equivalent) that the three approved tools have, and its brand association with Elon Musk creates institutional complications for the Senate specifically.
What can staff use the approved tools for?
Approved uses include drafting and editing documents, summarizing information, preparing talking points, and conducting research on publicly available topics. Staff may not enter personally identifiable information, classified material, controlled unclassified information, or physical security details into any of the approved tools.
Does all AI output need human review?
Yes. The memo requires that all AI-generated content be reviewed and validated by a human before use in any official capacity. This is a standard requirement in government AI policies and is designed to prevent AI errors from becoming embedded in official communications or documents.
How does this compare to what the House allows?
The House permits ChatGPT, Gemini, Copilot, and Claude — four tools. The Senate's approved list has three, with Claude absent. The House also has a more detailed tiered policy: basic AI tasks are self-approved, while higher-stakes uses like drafting constituent correspondence require manager sign-off. The Senate memo does not appear to create the same tiered structure.
Does the policy apply to senators themselves or only to staff?
The memo specifically addresses staff use. Whether senators themselves fall under the same policy or are treated differently is not addressed in the publicly available memo text. In practice, members of Congress use technology at their own discretion, subject to the same security requirements as their offices.
Does Copilot have access to Senate emails and files?
No. The memo explicitly states that Copilot Chat does not have access to Senate data unless information is explicitly shared in a prompt. It does not search internal drives, shared folders, email, Teams chats, or other Senate resources autonomously. It operates in Microsoft's secure government cloud under the same controls as other Senate Microsoft 365 data.
Will Claude ever be added to the Senate's approved list?
Possibly, but not in the near term. Anthropic is currently suing the Pentagon over the supply chain risk designation, and that litigation is expected to take years. If Anthropic prevails, the political context around Claude could change enough for the Senate SAA's office to consider adding it. For now, the memo's approved list is a snapshot of the current political and operational environment, not a permanent exclusion.
Is this a permanent policy or can it change?
The memo is an administrative technology policy, not a law. The Senate Sergeant at Arms' office can update, expand, or reverse the approved list without a floor vote. If political conditions change — for example, if the Anthropic lawsuit succeeds or if a new administration takes a different posture toward Anthropic — the list could be revised relatively quickly.
What is FedRAMP, and does it matter here?
FedRAMP (Federal Risk and Authorization Management Program) is the U.S. government's standardized approach to security assessment and authorization for cloud services. All three approved Senate tools have either FedRAMP authorization or equivalent security certifications for government use. This is one reason they were easier to approve than newer or less-established tools. Grok's absence from government security certification programs is one factor in its exclusion.
What does the Senate approval mean for OpenAI, Google, and Microsoft?
It is a meaningful validation for all three. Enterprise government contracts are valuable not just for revenue but for the credibility and security credentials they provide to other enterprise customers. Being on the Senate's approved list makes it easier to win contracts with regulated industries, state governments, and other institutions that look to federal approvals as a baseline.
How does this fit into the broader government AI trend?
Federal agencies and legislative bodies are moving from ad hoc AI usage — staff using personal accounts without oversight — to formal enterprise adoption with security-reviewed tools, central license management, and explicit use-case policies. The Senate's memo is part of that transition. Similar moves are underway in state governments, allied foreign governments, and large regulated industries like healthcare and finance.
What happens if staff use a tool that is not on the approved list?
The memo does not specify enforcement mechanisms. In most government contexts, using non-approved technology tools violates IT security policy and can result in disciplinary action. Whether the Senate enforces that through monitoring, auditing, or employee acknowledgment agreements is not addressed in the public memo text.
Is Anthropic challenging its executive branch exclusion?
Yes. Anthropic filed suit against the Pentagon and other federal agencies in March 2026 over the supply chain risk designation. The lawsuit argues that the statute used to justify the designation — 10 U.S.C. § 3252 — was designed for foreign-linked hardware suppliers, not domestic AI companies in a policy dispute. The case is in early stages and is expected to take years to resolve.
What data restrictions apply to ChatGPT Enterprise specifically?
ChatGPT Enterprise includes data isolation (user data is not shared across organizations), no model training on user inputs, and team admin controls. Inputs are processed in OpenAI's cloud infrastructure under a Business Associate Agreement-equivalent data processing addendum. For Senate use, staff are also bound by the memo's explicit restrictions on PII, classified material, and physical security information.
Can staff use personal accounts or personal devices?
The memo does not appear to restrict device type for approved tools, but the enterprise account requirement effectively limits use to accounts provisioned by the SAA. A staffer using their personal ChatGPT Plus account would be using a different tier, not the approved enterprise accounts with the data governance features the Senate vetted. Standard Senate IT policy likely requires use of approved accounts on managed or compliant devices.
What would it take for the Senate to add Claude to the approved list?
A shift in the political context is the most likely prerequisite. On the technical side, Anthropic would need to complete any additional government security certifications the SAA requires, but Claude already meets enterprise-grade security standards that have been acceptable to the House. The more significant barrier is the current administration's posture toward Anthropic and the Senate's institutional reluctance to take a stance that reads as counter to that posture.
Where can I read the full memo?
404 Media published the complete text of the memo after the initial NYT report. FedScoop and Storyboard18 also reported detailed breakdowns of the memo's contents.