Trump's federal preemption push vs. state AI laws: the Colorado showdown explained
Trump's executive order targets state AI laws with federal preemption. Colorado and 36 AGs are fighting back. Here is what it means.
TL;DR: President Trump signed an executive order on December 11, 2025, creating a DOJ "AI Litigation Task Force" to challenge state AI laws in federal court. Colorado's landmark anti-algorithmic-discrimination law is the first target. A bipartisan coalition of 36 state attorneys general is pushing back, and the FTC has until March 11, 2026, to issue a policy statement that could classify state bias-mitigation mandates as "deceptive trade practices." This is the biggest federal-state power struggle over technology regulation in years.
On December 11, 2025, President Trump signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence." The title is polite. The mechanism is not.
The order establishes three concrete enforcement tools.
First, it creates an AI Litigation Task Force inside the Department of Justice. Starting January 10, 2026, this task force has the explicit mandate to challenge state AI laws in federal court. The legal arguments it can deploy include claims that state laws unconstitutionally burden interstate commerce, conflict with federal regulations, or are "otherwise unlawful" in the Attorney General's judgment. That last phrase is doing a lot of heavy lifting.
Second, the Secretary of Commerce has 90 days to review all state AI laws and identify those that conflict with the administration's vision of national AI policy. The review specifically targets laws requiring AI models to "alter their truthful outputs" or compelling disclosures the administration considers violations of the First Amendment.
Third, and this is where money enters the picture, the order instructs the Department of Commerce to condition $42 billion in broadband infrastructure funding from the BEAD program on states repealing AI regulations deemed "onerous." That is not a subtle incentive. It is a financial threat directed at state budgets.
There are carve-outs. The order exempts state laws covering child safety, AI compute and data center infrastructure, and state government procurement. But these exemptions are narrower than they appear. Most of the state-level AI regulation gaining momentum in 2026 falls outside these safe harbors.
"The Executive Order signals an aggressive federal preemption strategy and is expected to face significant legal challenges as states push back against what they characterize as an encroachment on long-standing state authority." -- @PaulHastingsLLP
Colorado's AI Act, formally SB24-205, is the most comprehensive state-level AI anti-discrimination law in the country. It was signed into law in 2024 and targets "algorithmic discrimination," defined as any condition where an AI system produces unlawful differential treatment or impact based on protected characteristics.
The law requires both AI developers and deployers to exercise "reasonable care" to prevent algorithmic discrimination. Developers must furnish technical documentation, publish public statements about their models' risk profiles, and notify the Colorado Attorney General within 90 days of discovering that a high-risk system caused or is likely to have caused discriminatory outcomes. Deployers must adopt risk-management policies, perform annual impact assessments, and issue consumer notices before and after adverse decisions.
The enforcement date has shifted. Enforcement was originally scheduled for February 1, 2026, but Governor Jared Polis signed SB 25B-004 in August 2025, pushing the operative date to June 30, 2026. That delay came after intense lobbying from the tech industry during a special legislative session.
But the Trump executive order is not about delay. It is about elimination. The administration's position is that requiring AI models to correct for bias forces them to produce outputs that are less "truthful," and therefore constitutes deception. Under this framing, Colorado's entire anti-discrimination framework becomes a federal enforcement target.
The Colorado Attorney General's office has exclusive enforcement authority over the AI Act and has begun the rulemaking process. The AG has signaled publicly that the state intends to defend its law against federal challenges.
This sets up a direct collision. The DOJ's AI Litigation Task Force was designed for exactly this type of confrontation. Colorado, with its detailed compliance requirements and aggressive AG, is the obvious first test case.
Section 7 of the executive order directs the FTC to issue a policy statement within 90 days on how the FTC Act's prohibition on "unfair or deceptive acts or practices" applies to AI models. The deadline is March 11, 2026.
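For readers tracking the calendar, here is a quick sanity check on the order's two operative deadlines. This is a minimal illustrative sketch; the dates and day counts come from the order's timeline as described above, not from any official computation.

```python
from datetime import date, timedelta

# The order was signed December 11, 2025; its two deadlines are
# expressed as 30-day and 90-day windows from that date.
signed = date(2025, 12, 11)

task_force_date = signed + timedelta(days=30)  # DOJ AI Litigation Task Force stands up
ftc_deadline = signed + timedelta(days=90)     # FTC policy statement due

print(task_force_date)  # 2026-01-10
print(ftc_deadline)     # 2026-03-11
```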
The legal theory behind this is creative and aggressive. The administration argues that if an AI model is trained on data reflecting societal patterns, forcing developers to alter the model's outputs to mitigate bias compels them to produce results less faithful to the underlying data. Under this interpretation, bias mitigation renders the model less "truthful" and therefore deceptive.
Read that again. The executive order is positioning civil rights protections as consumer fraud.
There are real legal limits here. Policy statements are interpretive, not binding regulations. Courts can and do reject FTC interpretive positions. The FTC's preemption authority under Section 5 of the FTC Act has historically been narrow, and using it to override state anti-discrimination law would be legally unprecedented.
But the strategy is not necessarily about winning in court. A formal FTC policy statement, even if ultimately struck down, creates immediate uncertainty for companies trying to comply with state laws. If the FTC says state-mandated bias mitigation is "deceptive," corporate legal departments will hesitate to implement it, regardless of whether courts later disagree. The chilling effect is the point.
"Policy statements are interpretive rather than binding regulations, and courts may reject the premise that correcting for bias constitutes deception." -- @TechPolicyPress
While the executive order targets comprehensive AI governance laws like Colorado's, a second wave of state legislation is building underneath it. According to the Transparency Coalition, there are now 78 chatbot-specific bills alive in 27 states. More broadly, lawmakers are tracking over 300 AI-related bills across the country just one month into 2026.
The chatbot bills focus primarily on safety protections for minors interacting with AI companion apps. This is a rare bipartisan issue. Bills are advancing in traditionally conservative states like Utah, Nebraska, Alabama, Tennessee, and Oklahoma alongside progressive ones like California, Oregon, New York, and Hawaii.
Here is the state-level breakdown of AI regulatory activity as of February 2026:
| State | AI law type | Status | Targeted by EO? |
|---|---|---|---|
| Colorado | Anti-algorithmic discrimination | Delayed to June 2026 | ✓ Yes |
| California | Multiple transparency + employment laws | Active enforcement | ✓ Yes |
| Illinois | AI hiring bias (AIDA) | In effect | ✓ Yes |
| New York City | Automated employment decision tools | In effect | ✓ Yes |
| Texas | AI transparency bills | Pending 2026 | Likely |
| Connecticut | AI governance framework | Active | Likely |
| Utah | Chatbot safety for minors | Pending 2026 | ✗ Exempt (child safety) |
| Tennessee | AI chatbot disclosures | Pending 2026 | ✗ Exempt (child safety) |
The child safety carve-out in the executive order creates an interesting split. States focusing narrowly on chatbot safety for kids are largely shielded. But states pursuing broader algorithmic accountability, employment discrimination, or transparency requirements are directly in the crosshairs.
The volume of legislative activity also matters for the preemption argument itself. Tech companies and their trade groups argue that 300+ bills across 50 states create an impossible compliance environment. That argument gets stronger as more states pass different versions of AI regulation with conflicting requirements.
The lobbying campaign behind federal preemption is the most expensive corporate influence operation in technology policy since the net neutrality fights of the 2010s.
According to Sludge, more than one in four federal lobbyists (3,570 individuals, or 26% of all registered lobbyists) reported working on AI issues in 2025. Between 2022 and 2025, the number of lobbyists working on AI issues rose by 168%.
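Those two figures imply a few numbers the reporting does not state directly. A back-of-the-envelope sketch follows; the derived totals are estimates implied by the reported 3,570, 26%, and 168% figures, not reported numbers themselves.

```python
# Back-of-the-envelope arithmetic on the reported lobbying figures.
ai_lobbyists_2025 = 3_570   # lobbyists reporting AI work in 2025
share_of_all = 0.26         # 26% of all registered lobbyists
growth_since_2022 = 1.68    # 168% increase from 2022 to 2025

# Implied total registered lobbyists in 2025: 3,570 / 0.26
total_registered_2025 = ai_lobbyists_2025 / share_of_all

# Implied 2022 baseline: 3,570 / (1 + 1.68)
ai_lobbyists_2022 = ai_lobbyists_2025 / (1 + growth_since_2022)

print(f"~{total_registered_2025:,.0f} total registered lobbyists in 2025")  # ~13,731
print(f"~{ai_lobbyists_2022:,.0f} AI lobbyists in 2022")                    # ~1,332
```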
The U.S. Chamber of Commerce had 91 lobbyists on AI issues in 2025. Microsoft deployed 63. Meta had 55. Intuit, notably one of OpenAI's early Frontier enterprise customers, fielded 51.
The pro-industry super PAC "Leading the Future" launched a $10 million ad and lobbying campaign specifically to push Congress toward a "uniform national AI policy" that would preempt state laws. OpenAI, Google, TechNet, Microsoft, Business Roundtable, and Andreessen Horowitz are all on record supporting federal preemption.
Registered lobbying firms collected nearly $92 million for work on AI-related issues in the first three quarters of 2025. And 91 of the top 100 entities hiring the most AI lobbyists were corporations or corporate trade groups.
The industry argument is straightforward. A patchwork of 50 different state AI laws creates an unworkable compliance burden. Companies building AI models that serve customers nationwide cannot realistically comply with Colorado's algorithmic discrimination rules, California's transparency mandates, Illinois's hiring bias requirements, and New York City's automated decision tool regulations simultaneously, especially when those laws define terms differently and impose conflicting obligations.
That argument has genuine merit from a practical standpoint. But critics point out that the push for "uniform" rules in practice means "fewer" rules. The federal framework the industry envisions is lighter than what states like Colorado have enacted.
"Big Tech companies and their trade groups supported and lobbied for the moratorium's inclusion in the reconciliation bill." -- @EconomicPolicy
The state-level response has been swift, organized, and bipartisan.
On November 25, 2025, the National Association of Attorneys General sent a letter to Congressional leaders on behalf of a bipartisan coalition of 36 state attorneys general. The letter urged Congress to reject proposals for a federal moratorium on state AI law enforcement.
The AGs argued that broad federal preemption would undermine states' ability to respond quickly to emerging AI risks. They positioned state regulation as the only functional consumer protection mechanism available, given the absence of comprehensive federal AI legislation.
Three weeks later, on December 17, 2025, a group of 23 attorneys general sent a separate letter to the FCC, opposing the commission's efforts to preempt state AI laws.
New York Attorney General Letitia James led both efforts, calling the proposed federal moratorium "irresponsible" and arguing that it would leave consumers without protection during a critical period of AI deployment.
The opposition extends beyond government officials. More than 140 civil rights and consumer protection organizations signed a letter to Congress opposing preemption. The signatories include the Center for Democracy and Technology, the Brennan Center for Justice, the Electronic Privacy Information Center, and the Southern Poverty Law Center.
The legal battle lines are now clearly drawn. The DOJ's AI Litigation Task Force on one side. A 36-state coalition of AGs, backed by over 140 civil society organizations, on the other. This will end up in federal court, probably multiple federal courts simultaneously.
The contrast between the EU and US approaches to AI regulation in 2026 could not be sharper.
The EU AI Act is a single, comprehensive, binding regulation that applies uniformly across all 27 member states. It classifies AI systems into risk tiers, from minimal to unacceptable, and applies progressively stricter requirements at each level. High-risk systems must meet specific standards for transparency, data governance, human oversight, and accuracy before they can be deployed.
The US has no federal equivalent. What it has instead is an executive order attempting to prevent states from filling the gap, combined with no congressional movement toward a comprehensive alternative.
| Dimension | EU AI Act | US federal approach | US state approach |
|---|---|---|---|
| Legal structure | Binding regulation | Executive order (not law) | Individual state laws |
| Scope | All 27 member states | Aspirational nationwide | State-by-state |
| Risk classification | Four-tier system | None | Varies by state |
| Bias mitigation | Required for high-risk systems | Called "deceptive" by EO | Required in CO, IL, NYC |
| Enforcement body | National authorities + EU AI Office | DOJ task force (new) | State AGs |
| Timeline | Phased enforcement 2024-2027 | March 11, 2026 FTC deadline | Varies |
| Congressional approval needed | N/A (EU regulation) | Yes for binding law, no for EO | N/A |
The EU's approach has its own critics. Companies complain about compliance costs, and some argue the regulation stifles innovation. But the EU has the advantage of legal clarity. Companies operating in Europe know exactly what they must do.
In the US, companies face the worst of both worlds: state laws that might be preempted tomorrow, and a federal framework that consists entirely of an executive order with uncertain legal authority. An executive order is not legislation. It can be reversed by the next president. It can be struck down by courts. It creates policy uncertainty rather than resolving it.
For multinational companies, the practical effect is that EU AI Act compliance becomes the global baseline. If you are already meeting EU requirements, you are likely compliant with most state laws. The federal preemption fight is largely irrelevant to your operations, because the EU bar is higher than anything US states have proposed.
The next critical date is March 11, 2026. That is when the FTC must issue its policy statement on whether state-mandated bias mitigation constitutes deceptive trade practices.
Several outcomes are possible. The FTC could issue a narrow statement that affirms some forms of preemption while preserving room for state regulation. It could issue the broad statement the executive order envisions, declaring all state bias-mitigation mandates deceptive. Or, given the FTC's current composition and internal politics, it could issue a statement that technically complies with the executive order but leaves enough ambiguity that states can continue enforcement.
Whatever the FTC does, litigation follows immediately. Colorado's AG has the rulemaking infrastructure already built. New York's AG has led the coalition efforts. Both states have the legal resources and political motivation to challenge a broad FTC preemption claim.
The constitutional questions are genuinely complex. Does the Commerce Clause give the federal government authority to preempt state consumer protection laws governing AI? Can the FTC Act's prohibition on deception be stretched to cover state-mandated bias correction? Can an executive order, without congressional legislation, preempt state statutes?
Meanwhile, the 78 chatbot bills continue to move through state legislatures largely unaffected by the federal preemption fight. The child safety carve-out in the executive order means that the fastest-growing category of state AI regulation is explicitly shielded.
The broader political calculus also matters. AI regulation is bipartisan at the state level, with chatbot safety bills advancing in red states and blue states alike. Federal preemption that overrides consumer protections popular with voters in both parties is a politically risky strategy, especially heading into the 2026 midterm elections.
For companies building or deploying AI systems in the United States, the practical advice is simple but unsatisfying. Comply with existing state laws until a court says you do not have to. The executive order creates political pressure, not legal certainty. Until the FTC acts, until courts rule, and until Congress passes actual legislation, state laws remain enforceable.
Does the executive order immediately invalidate state AI laws?
Not directly. Executive orders direct federal agencies, but they cannot override state statutes without congressional legislation or a court ruling. The order creates an enforcement mechanism through the DOJ task force and the FTC, but actual preemption requires either new federal law or successful litigation. Gibson Dunn's analysis notes this distinction clearly.
Is Colorado's AI Act still going into effect?
Yes, but its enforcement date was delayed from February 1, 2026, to June 30, 2026, after Governor Polis signed SB 25B-004 in August 2025. The rulemaking process at the Colorado AG's office continues.
What happens if the FTC declares state bias-mitigation mandates deceptive?
The FTC policy statement would be interpretive, not a binding regulation. States would likely challenge it in court immediately. Companies would face a period of legal uncertainty where both the FTC position and state laws claim authority. In practice, most legal experts expect courts to take years to resolve the conflict.
Which other states have AI laws that could be targeted?
Illinois (AI hiring bias under AIDA), New York City (automated employment decision tools under Local Law 144), and several states with narrower AI transparency and disclosure requirements. California has multiple active AI laws covering different domains. Colorado's comprehensive law takes effect June 30, 2026.
Are the state chatbot safety bills affected?
Not directly. The order explicitly exempts state laws related to child safety from preemption. The 78 chatbot bills in 27 states, which focus primarily on protecting minors, fall under this carve-out.
How many lobbyists are working on AI issues?
As of 2025, 3,570 federal lobbyists, representing 26% of all registered lobbyists, reported working on AI issues. The number grew 168% between 2022 and 2025.
How does broadband funding factor in?
The executive order directs the Department of Commerce to condition broadband infrastructure funding on states repealing AI regulations the federal government considers onerous. This uses existing federal spending as pressure to change state-level policy.
How does the US approach compare with the EU's?
The EU has a single, binding regulation covering all member states with clear risk tiers and requirements. The US has no comprehensive federal law. Instead, it has an executive order trying to prevent states from regulating, combined with a patchwork of state laws with varying requirements and timelines.
Can the executive order be reversed?
Yes. Executive orders can be revoked by any subsequent president on day one. This is one reason legal experts emphasize that only congressional legislation creates durable AI policy. The December 2025 order replaced Trump's own earlier AI executive order from January 2025, which itself revoked the Biden administration's October 2023 AI executive order.
What should companies do now?
Continue complying with existing state laws until courts rule otherwise. Monitor the March 11, 2026, FTC deadline. If you operate in Colorado, prepare for the June 30, 2026, enforcement date. If you serve customers in the EU, comply with the EU AI Act, which sets a higher bar than any US state law.