Trump AI executive order: FTC & state law preemption
Trump's AI executive order preempts state AI laws as FTC comment deadline hits. What OpenAI, Anthropic, and 36 AGs are doing about it.
TL;DR: The Trump AI executive order signed March 9, 2026 directs federal agencies to deploy AI in 50%+ of public-facing services by December 2026 and contains a sweeping federal preemption clause that would override state AI consumer protection laws in California, New York, and Texas. The FTC's public comment window on AI commercial surveillance closed March 11 with over 40,000 submissions. OpenAI, Google, Microsoft, and Meta backed the preemption push; 36 state attorneys general are preparing lawsuits; Anthropic stayed silent.
The Federal Trade Commission opened public comment on its proposed "AI commercial surveillance" rule in late 2025. The proposal is conceptually direct: if a company wants to use your personal data (your emails, your medical records, your purchase history) to train an AI model, it must first ask you. Not buried in terms of service. Explicit opt-in consent.
The comment window closed March 11, 2026 with over 40,000 submissions, an unusually heavy filing volume for an FTC rulemaking. Privacy groups submitted technical arguments. Google's submission alone ran 300 pages. The AI industry, broadly, pushed back on the breadth of the opt-in requirement, arguing it would chill AI development by making data acquisition practically impossible at scale.
The FTC's proposed rule, if finalized, would be the first federal opt-in consent requirement for AI training data in US history.
But the rule carries a complication that makes today's deadline bigger than a bureaucratic event. The White House's December 2025 AI executive order directed the FTC to issue a separate policy statement, also due by March 11, declaring that state-mandated AI bias corrections constitute "unfair and deceptive acts or practices" under the FTC Act. If the commission issues that statement, the federal government officially claims preemption authority over every state AI consumer protection law in the country.
FTC Chair Andrew Ferguson is a Trump appointee who has publicly questioned whether aggressive AI regulation serves consumers or burdens companies without benefit. The commission could issue a narrow statement, a broad one, or craft language that satisfies the executive order's letter while preserving legal room to maneuver.
Whatever the FTC issues, litigation follows within days. Thirty-six state attorneys general have said so publicly. That is not speculation.
The March 9, 2026 executive order (building on the December 2025 framework formally titled "Ensuring a National Policy Framework for Artificial Intelligence") has three enforcement mechanisms that matter for the federal preemption fight.
The AI Litigation Task Force. A DOJ task force active since January 10, 2026, has explicit authority to challenge state AI laws in federal court using Commerce Clause arguments, conflict preemption claims, and a broad catch-all provision allowing the Attorney General to challenge any state law judged "otherwise unlawful." That last phrase is the widest net in the order. It gives the DOJ maximum flexibility to pick targets.
The Commerce Department review. The Secretary of Commerce has 90 days from the order's signing to survey all state AI laws and flag those conflicting with federal AI policy. The review specifically targets laws requiring AI models to "alter their truthful outputs," a formulation that captures anti-discrimination and bias-mitigation requirements. Under this framing, Colorado's algorithmic bias correction law becomes a federal enforcement target because the administration argues bias correction makes model outputs less "truthful."
The 50 percent federal AI mandate. Federal agencies must deploy AI in at least 50 percent of all public-facing government services by December 2026. Benefits processing, permit applications, information requests, customer service. This is a procurement deadline with hard compliance requirements, not an aspirational goal. Every federal contractor building for this market must optimize for federal standards, because that is where the contracts are. CNBC has tracked this lobbying ecosystem extensively.
What the order does not do is create a coherent federal AI regulatory framework to replace what it removes. It eliminates state frameworks. The replacement does not yet exist. That gap is where the real policy uncertainty lives, and where the litigation will focus.
"The Trump AI executive order removes state consumer protections before the federal replacement is built." That is the core criticism from the 36-state AG coalition, stated plainly.
Three states sit at the center of the federal preemption fight, each representing a different regulatory approach.
California Assembly Bill 2013 requires developers of AI systems trained on datasets exceeding one million records to publish training data summaries, intended uses, known limitations, and evaluation results. AB 1008 restricts how employers use AI-generated assessments in hiring and termination decisions. Both laws are actively enforced. California's legislature believes it has the authority and obligation to regulate AI companies headquartered within its borders.
The DOJ's AI Litigation Task Force has flagged both bills. California AG Rob Bonta has publicly committed to defending them. Legal analysts at The Verge's policy desk put the probability of federal court challenge at near-certainty once the FTC issues its preemption statement.
New York Local Law 144, in effect since 2023, requires employers using automated employment decision tools to conduct annual bias audits and publish results publicly. AG Letitia James leads the 36-state coalition opposing federal preemption and has been explicit: New York will litigate any federal attempt to override its employment discrimination protections.
This is the strongest legal ground for states. Employment discrimination law has deep federal-state interaction going back decades. Preempting New York's law requires the DOJ to argue that a state anti-discrimination requirement conflicts with federal policy. That argument is harder than it looks.
Texas presents the politically interesting case. The state has historically opposed federal overreach, and yet the Trump administration's preemption push benefits Texas's large technology sector. Texas's pending AI transparency bills are in legal gray territory. Child-safety carve-outs in the executive order likely protect the narrower bills, but broader transparency requirements face DOJ challenge.
Red-state legislators who philosophically oppose federal preemption find themselves aligned with blue-state consumer advocates against their own administration. That tension has not fully surfaced in Congress, but it is building, and it could fracture the Republican coalition on AI legislation before the 2026 midterms.
| Feature | Federal framework (EO) | California AB-2013 | New York LL-144 | Colorado bias law |
|---|---|---|---|---|
| Training data disclosure | ✗ | ✓ | ✗ | ✗ |
| Bias audit requirement | ✗ | ✗ | ✓ | ✓ |
| Opt-in consent for training | ✗ (FTC rule pending) | ✓ | ✗ | ✗ |
| Employment AI restrictions | ✗ | ✓ | ✓ | ✗ |
| Enforcement mechanism | DOJ litigation | CA AG + civil suits | NYC DCWP + AG | State enforcement |
| Preemption status (March 2026) | Active (contested) | Under DOJ challenge | Under DOJ challenge | Under DOJ challenge |
| Consumer private right of action | ✗ | ✓ | ✓ | ✓ |
The table shows the gap clearly. The federal executive order removes state protections without adding equivalent federal ones. Training data disclosure, bias audits, employment restrictions, and private rights of action all exist under state law and none exist under the federal framework the executive order creates.
Under federal preemption as currently structured, a California resident loses training data disclosure rights, bias audit access, and the ability to sue directly, all without gaining any equivalent federal protection.
The ACLU and EFF submitted FTC comment letters making exactly this point: the commercial surveillance rulemaking should be a consumer protection floor that states can exceed, directly contradicting the executive order's preemption framing. Both organizations have committed to filing amicus briefs in every state AG lawsuit.
The industry coalition supporting the Trump AI executive order's preemption clause is not subtle about its motivations. Those motivations are legitimate from a business standpoint, even if the policy outcome is contested.
OpenAI is the clearest beneficiary. Its $110 billion funding round anchored by Amazon, Nvidia, and SoftBank is oriented toward enterprise and government deployment. A single federal standard, even a permissive one, is better for OpenAI's business than managing compliance with California, Illinois, New York, Colorado, and Connecticut simultaneously. OpenAI's policy team has testified to Congress that federal preemption is a precondition for responsible nationwide AI deployment. That position is self-serving and probably accurate at the same time.
Google has 55 active AI products in various states of launch. Compliance with 50 different state disclosure and bias-audit regimes would require a compliance operation larger than most AI startups' entire headcount. Per MIT Technology Review's federal preemption analysis, Google would accept a modest federal standard in exchange for eliminating state-level variation. Google's 300-page FTC submission was its most detailed regulatory comment since the 2012 search antitrust inquiry.
Microsoft has 63 registered lobbyists working on AI policy. Its Copilot suite touches federal government, healthcare, financial services, and consumer markets, each operating under different state requirements. The company is also among the largest federal contractors and has direct financial interest in the 50 percent AI deployment mandate succeeding.
Meta has been the most aggressive, arguing that state AI regulation harms free expression by restricting what models can output. This blurs corporate interest and constitutional argument in ways that make Meta the most politically effective, if intellectually contested, voice in the preemption debate. TechCrunch's FTC AI surveillance coverage has documented how Meta's framing shifted congressional Republican opinion more than any other lobbying argument.
Combined AI policy lobbying spend exceeded $92 million in the first three quarters of 2025. The pro-preemption super PAC "Leading the Future" ran a $10 million ad campaign targeting swing-district House members with the uniform national AI policy message. That money bought congressional attention, if not yet congressional votes.
The argument the industry makes is not wrong on its own terms. Complying with fifty different state AI regimes is genuinely expensive and creates real barriers to deployment. The question the preemption critics ask is: expensive for whom, and at what cost to whom else?
In any other week, federal AI preemption would be Anthropic's fight. The company has positioned itself on AI safety, state-level consumer protection aligns with its public values, and it has taken uncomfortable public positions before, including Dario Amodei's public letter refusing unrestricted Pentagon military access to Claude.
But March 11, 2026 finds Anthropic in a complicated political position. The company is currently in active litigation with the Pentagon after refusing unrestricted military access and being designated a supply chain risk by the Department of Defense. An executive order directs all federal agencies to stop using Anthropic's technology. Loudly opposing the same White House's AI executive order would damage federal relationships the company needs for its lawsuit and for eventual restoration of federal contracts. You do not win a fight by opening a second front.
Anthropic submitted FTC comments consistent with its public process commitments. It issued no press release, no policy paper, and no CEO statement on state law preemption. For a company that published a refusal letter in full, the silence is conspicuous.
The strategic irony: federal preemption primarily benefits Anthropic's most aggressive competitors. A uniform, permissive federal standard makes it easier for companies with larger deployment footprints, OpenAI and Google specifically, to move fast without state-by-state friction. Anthropic's safety-first positioning works partly because state-level requirements push customers toward higher safety standards. Preempting those requirements erodes Anthropic's competitive differentiation in the enterprise markets where it competes best.
Anthropic probably understands this. But fighting on two fronts against the federal government simultaneously is not a viable strategy. The silence is a calculation, not a position.
The Anthropic Pentagon litigation is a useful lens for reading the company's regulatory behavior this week. When survival is in question, policy positions become secondary.
Senate Commerce Committee members Ted Cruz (R-TX) and Mark Warner (D-VA) introduced a bill in February 2026 that would codify federal AI preemption into statute, removing the executive order's vulnerability to court challenge and to reversal by a future president.
Where the executive order uses DOJ litigation and FTC preemption claims, the Cruz-Warner bill would create a federal AI safety and transparency framework administered by a new interagency council. States could maintain laws at least as protective as the federal floor (what lawyers call cooperative federalism) but could not impose requirements the council determines conflict with the national standard.
The consumer floor is what makes this bill different from the executive order. The executive order removes state protections and creates no federal floor. The Cruz-Warner bill creates a federal floor while limiting state ceilings.
That floor is why Warner signed on. The Virginia Democrat believes a well-designed federal framework delivers better consumer outcomes than fragmented state law. Cruz's motivation is simpler: limiting compliance burdens on Texas-based tech companies and financial institutions. The preemption provision is what Cruz needs; the consumer protection floor is what keeps Warner's name on the bill.
The bill's prospects are poor before the 2026 midterms. California's senators oppose it. New York's senators oppose it. Republican senators from states with active AI child safety bills are skeptical of the preemption language despite an explicit child safety carve-out. The realistic path: passes committee, stalls before a floor vote. If that happens, the executive order remains the operative instrument with all its legal vulnerabilities intact.
If the Cruz-Warner bill fails and the executive order survives litigation, the result is an uneven regulatory field with no federal floor, no state ceilings, and full litigation uncertainty for every company trying to build AI products for American consumers.
That outcome serves no one's interests well, including the industry coalition that backed the preemption push.
Brussels has watched the US preemption fight with the kind of attention that eventually produces regulatory action.
The European Commission's position, articulated through the AI Office established under the EU AI Act, is that US federal preemption creates the conditions the AI Act was designed to prevent. If AI companies face minimal US accountability while facing strict EU requirements, the incentive structure pushes product development toward the US and treats EU compliance as a tax rather than a standard.
Companies do not need to choose between markets. They develop AI under permissive US rules, then apply a compliance layer for EU distribution. The EU's requirements generate costs without generating the safety outcomes they were designed to produce, because the underlying models were built under different standards. MIT Technology Review's federal preemption analysis noted that EU trade officials have flagged this as a structural competitive disadvantage for European AI companies.
The practical implication is that EU AI Act compliance has become the de facto global enterprise baseline regardless of what happens to California's state laws. US companies selling into European markets must meet EU standards. The federal preemption fight is primarily domestic in its effects: it reduces protection for American consumers without changing requirements for companies serving global markets.
There is also a second-order risk. If the EU sees the US as creating deliberate regulatory arbitrage for its AI companies, it has policy tools to respond: data transfer restrictions, market access conditions, and digital trade provisions that could make EU market access conditional on adherence to higher safety standards. That response is not imminent. It is being discussed.
"US federal preemption of state AI laws reduces protection for American users without reducing compliance requirements for companies that serve European markets." That is the EU AI Office's core argument, and it is accurate.
The geopolitical dimension also matters. AI regulation is now a trade issue as much as a consumer protection one, and the US-EU relationship on AI governance is one of the few areas where both parties had begun to find common ground. The preemption push complicates that.
The next three months will determine whether today marks the start of a new federal AI regulatory era or the beginning of a prolonged legal fight with no clear winner.
Scenario 1: broad FTC preemption statement. Litigation begins immediately. Colorado, New York, and California attorneys general file within days. Cases move through district courts over 12 to 18 months. Courts will likely stay the preemption effect pending litigation in most jurisdictions. Uncertainty continues for every company trying to build compliant AI products. The 36-state coalition has pre-filed briefs ready.
Scenario 2: narrow or ambiguous FTC statement. The executive order's preemption mechanism stalls. The AI Litigation Task Force continues operating, but without clear FTC backing the legal theory weakens significantly. States continue enforcing their own laws. Companies face the patchwork compliance environment the industry lobbied against.
Scenario 3: Cruz-Warner bill advances. Binding statutory preemption, harder to challenge and more durable than an executive order. Companies get regulatory certainty. The question then is whether the consumer protection floor is high enough to replace what state laws provided. Early bill text suggests the floor is lower than California's current standard.
The FTC's commercial surveillance rulemaking, which is a separate process from the preemption policy statement, continues regardless. The March 11 comment window closure starts a 6 to 12 month agency review, followed by a proposed rule and an additional comment period. A final opt-in consent rule requiring companies to get permission before training on personal data is unlikely before late 2027, under any of the three scenarios.
For consumers, the situation is direct: your data trains AI models under terms you never read in detail. The FTC wants to change that. The White House wants to prevent states from changing that. Congress has not passed AI legislation. The regulatory space is empty at the federal level and contested at the state level.
The defining question of US AI regulation in 2026 is not technical. It is political: does federal interest in AI adoption speed outweigh states' interest in protecting their residents? Today's deadline does not answer that. But it forces everyone to choose a position.
For more on how AI companies are navigating federal-state regulatory conflict, see our analysis of OpenAI's federal contracting strategy and how AI companies handle regulatory uncertainty.
The Trump AI executive order signed March 9, 2026 directs federal agencies to deploy AI in 50% of public-facing services by December 2026 and creates three enforcement mechanisms to challenge state AI consumer protection laws: an AI Litigation Task Force in the DOJ, a Commerce Department review of conflicting state laws, and a directive to the FTC to issue a preemption policy statement. The order does not directly override state statutes but sets up the legal and enforcement structure to challenge them in federal court.
The proposed FTC rule on AI commercial surveillance would require companies to obtain explicit opt-in consent before using personal data, including browsing history, purchase records, location data, and communications, to train AI models. The comment period closed March 11, 2026 with over 40,000 public submissions. A final rule, if issued, would be the first federal opt-in consent requirement for AI training data and is unlikely before late 2027.
The DOJ's AI Litigation Task Force has flagged California's AB-2013 (AI transparency), AB-1008 (employment AI), New York's Local Law 144 (automated employment decision tools), and Colorado's algorithmic discrimination law. Texas's pending AI transparency bills are in a legal gray zone, with child-safety provisions likely protected and broader transparency requirements exposed to challenge.
Does the executive order directly override state AI laws? No. Executive orders direct federal agencies; they cannot override state statutes without congressional action or a successful court ruling. The order's preemption mechanism works by directing the DOJ to sue states and directing the FTC to issue a policy statement asserting preemption authority. Actual preemption requires courts to agree. The 36-state AG coalition will contest that in federal court.
OpenAI's support for federal preemption is commercially rational. Its $110 billion funding round and enterprise government deployment strategy benefit from a single federal compliance standard rather than 50 different state regimes. OpenAI's policy team has testified that federal preemption is a precondition for nationwide AI deployment at scale. The position is both self-serving and arguably accurate from a compliance complexity standpoint.
Anthropic is in active litigation with the Pentagon after refusing unrestricted military access to Claude and being designated a supply chain risk by the DOD. An executive order bars all federal agencies from using Anthropic's technology. Publicly opposing the same administration's AI executive order would further damage the federal relationships Anthropic needs to win its lawsuit and restore federal contracts. The silence is a strategic calculation, not a policy position.
The Cruz-Warner bill, introduced February 2026 by Senators Ted Cruz (R-TX) and Mark Warner (D-VA), would codify federal AI preemption into statute and create a federal AI safety and transparency framework with a minimum consumer protection floor. States could maintain laws as protective as the federal floor but not exceed it significantly. The executive order has no federal consumer protection floor. The bill's prospects are weak before the 2026 midterms.
The European Commission and the AI Office established under the EU AI Act view US federal preemption as creating regulatory arbitrage: AI companies develop products under permissive US rules and apply a compliance layer only for EU market access. The EU's position is that this undermines the safety outcomes the AI Act was designed to produce. EU trade officials have flagged this as a structural competitive disadvantage for European AI companies.
The March 9, 2026 executive order requires federal agencies to use AI in at least 50 percent of public-facing government services, including benefits processing, permit applications, information requests, and customer service, by December 2026. This is a hard procurement deadline with compliance requirements, not a goal. Federal contractors must optimize for federal AI standards because that is where the government contracts are.
California's AI transparency law, which requires developers of large AI systems to publish training data summaries, intended uses, and limitations, faces DOJ challenge through the AI Litigation Task Force. Whether it survives depends on how federal courts interpret Commerce Clause preemption arguments. California AG Rob Bonta has committed to defending the law aggressively, and the state has pre-filed litigation response briefs.
Federal preemption that treats the FTC's commercial surveillance rule as a ceiling, rather than a floor, would reduce privacy protection for California residents compared to current state law. California's opt-in requirements for sensitive data processing exceed what the proposed FTC rule would require even if finalized. Consumers in preempted states lose enforceable rights without gaining equivalent federal protections.
The AI Litigation Task Force is a DOJ unit established January 10, 2026 under the December 2025 executive order. It has authority to challenge state AI laws in federal court using Commerce Clause arguments, conflict preemption claims, and a broad catch-all provision allowing the Attorney General to challenge any state law judged "otherwise unlawful." It is the executive order's primary enforcement arm.
Do state AI laws still apply? Yes, for now. The executive order does not directly invalidate state laws. States continue enforcing their AI regulations unless a court rules otherwise. Companies should maintain compliance with state laws they currently follow until courts issue injunctions or rulings to the contrary. The period of litigation uncertainty, likely 12 to 18 months minimum, means state laws remain operative in most jurisdictions.
The 36-state coalition opposing federal preemption is led by New York AG Letitia James, California AG Rob Bonta, and Colorado AG Phil Weiser. It includes AGs from both red and blue states, including several Republican attorneys general who oppose the preemption on federalism grounds independent of their positions on AI regulation. The coalition has announced it will file lawsuits within days of any broad FTC preemption statement.
Both the ACLU and the Electronic Frontier Foundation submitted FTC comment letters arguing the commercial surveillance rulemaking should establish a minimum consumer protection floor that states can exceed, directly contradicting the executive order's preemption framing. Both organizations have committed to filing amicus briefs in every state AG lawsuit challenging a broad FTC preemption statement. Their legal argument centers on the FTC Act's relationship to state consumer protection law.
The FTC's proposed opt-in consent requirement for AI training data parallels GDPR's lawful basis requirements for personal data processing in the EU, which require a legal basis such as consent, legitimate interest, or contract for any data use. The FTC rule would not be equivalent to GDPR, which also covers broader data rights including access, deletion, and portability, but the opt-in consent mechanism specifically is the closest the US has come to a GDPR-style AI training data protection.
What should AI companies do right now? Continue complying with existing state laws until courts rule otherwise. A policy statement from the FTC is interpretive, not a final rule, and states will challenge it immediately in court. Build compliance programs around the most stringent requirements you currently face. California and EU AI Act compliance together cover most scenarios, and meeting both standards means you are compliant everywhere that currently has enforcement. Do not assume federal preemption makes state compliance optional until a court explicitly rules on it.
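One way a compliance team might operationalize the "comply with the strictest standard" advice is to gate training data on explicit opt-in consent by default, which satisfies the proposed FTC rule and California-style requirements simultaneously. The sketch below is purely illustrative, not legal advice; the `UserRecord` fields and the helper function are invented for this example and assume consent is tracked as a per-record flag:

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str
    data: str
    # Hypothetical flag: True only if the user gave explicit,
    # unbundled opt-in consent for AI training use.
    training_opt_in: bool

def filter_training_corpus(records):
    """Keep only records with explicit opt-in consent.

    A conservative default: building for the strictest regime on
    the table means excluding anything without affirmative consent,
    regardless of which jurisdiction the user is in.
    """
    return [r for r in records if r.training_opt_in]

records = [
    UserRecord("u1", "email text", training_opt_in=True),
    UserRecord("u2", "purchase history", training_opt_in=False),
]
print([r.user_id for r in filter_training_corpus(records)])  # ['u1']
```

The design choice worth noting is the default direction: records are excluded unless consent is affirmatively recorded, so a missing or stale flag fails safe rather than exposing the company to retrospective liability if state laws survive litigation.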
The 50 percent AI deployment mandate creates a significant federal procurement opportunity for AI companies that can meet federal standards and security requirements. Every major federal contractor must now plan for AI integration across public-facing services. This procurement shift benefits companies with existing federal contracting relationships and FedRAMP authorization, and it creates compliance requirements for the vendors building those systems.
Will the Cruz-Warner bill pass before the 2026 midterms? Probably not. California's senators oppose it. New York's senators oppose it. Republican senators from states with AI child safety legislation are skeptical of the preemption language despite an explicit carve-out. The bill could pass the Commerce Committee but is unlikely to reach a floor vote before the November 2026 midterms. If it fails, the executive order remains the operative instrument with its legal vulnerabilities intact.
The March 11, 2026 comment period closure starts a review process that, under standard rulemaking timelines, involves a 6 to 12 month agency review, publication of a proposed rule, a second comment period, and final rule issuance. A final opt-in consent rule for AI training data is unlikely before late 2027. The process is also subject to change if FTC leadership changes or if Congress passes AI legislation that supersedes the rulemaking.
The biggest practical risk is prolonged regulatory uncertainty. If courts stay state laws pending litigation for 12 to 18 months but do not rule in the federal government's favor, companies face a period where they cannot reliably know which compliance standards apply. Building for the most stringent standard (California state law plus the EU AI Act) is expensive but reduces legal exposure across all scenarios. Companies that reduced state compliance efforts based on the executive order and then lose in court face retrospective liability.