TL;DR: The AI industry has gone fully political. With $100 million in committed spending — part of a broader $300 million-plus war chest now mobilizing across multiple PACs and allied groups — Silicon Valley is no longer content to testify before congressional committees and hope for the best. It is buying seats at the table, and where necessary, the table itself. The 2026 midterms are shaping up to be the most consequential electoral battleground for technology policy in U.S. history, and every major player in the AI space has picked a side.
This is not a defensive maneuver. It is a coordinated, well-financed political offensive designed to lock in a regulatory environment favorable to rapid AI deployment, preempt state-level safety legislation, and ensure that the next Congress is populated with lawmakers who will vote accordingly. The scale of spending, the caliber of donors, and the sheer sophistication of the campaign infrastructure signal that AI's political coming-of-age is complete.
Leading the Future: The $100M Super PAC at the Center of It All
The centerpiece of the pro-AI electoral push is Leading the Future, a super PAC that was formed in the summer of 2025 and has since become the largest single vehicle for AI industry political spending in American history. The group closed out 2025 with $125 million raised and roughly $70 million still in its war chest heading into the active campaign season — a figure that dwarfs the previous single-cycle super PAC record set by the Club for Growth Action ($69 million in 2022).
The donor list reads like a who's who of Silicon Valley power. Marc Andreessen and Ben Horowitz, the co-founders of venture capital firm Andreessen Horowitz (a16z), each contributed $12.5 million personally. OpenAI co-founder and president Greg Brockman and his wife matched that figure with another $12.5 million combined. Palantir co-founder Joe Lonsdale, longtime Democratic tech donor Ron Conway of SV Angel, and AI web-browsing startup Perplexity are also among the backers.
The PAC's leadership is deliberately bipartisan in its composition, at least on paper. Republican operative Zac Moffatt, CEO of digital firm Targeted Victory, runs the operation alongside Democratic operative Josh Vlasto, a former press secretary to Senate Majority Leader Chuck Schumer and one-time aide to New York Governor Andrew Cuomo. That cross-aisle architecture was designed to give the group credibility with both parties — a calculation that, as we'll see, created serious political friction almost immediately.
Leading the Future's stated mission is to elect lawmakers who will pass a single federal standard for AI regulation that supersedes the patchwork of state laws now proliferating across the country. In practice, this means backing candidates who favor light-touch federal oversight and opposing any legislator who has championed mandatory safety requirements, liability frameworks, or emergency shutdown provisions for AI systems.
Innovation Council Action: The Trump-Aligned Counterpart
Running a parallel track is Innovation Council Action, a pro-Trump political nonprofit that announced plans to spend more than $100 million in the 2026 cycle. Where Leading the Future was structured to be bipartisan, Innovation Council Action is explicitly aligned with the Trump White House and its deregulatory AI agenda.
The group is led by Taylor Budowich, a former White House deputy chief of staff, and has received a public endorsement from David Sacks — the venture capitalist and podcast host who served as Trump's AI and crypto policy czar before stepping back from that formal role. The political footprint Sacks built during his White House tenure is now, in effect, being channeled into the electoral machinery of Innovation Council Action.
The group has opened a Washington, D.C. office, is building a lawmaker scorecard to grade members of Congress on their AI-related votes, and is explicitly positioning itself to reward allies and punish opponents within the Republican caucus as well as in competitive general election races. Unlike Leading the Future, which frames itself around "pro-innovation" bipartisanship, Innovation Council Action is unapologetically a MAGA-aligned operation — and it intends to use that alignment as leverage.
For more context on David Sacks's influence on federal AI policy and the vacuum his White House departure created, see our earlier coverage: David Sacks Exits as Trump's AI and Crypto Czar — and the Policy Vacuum He Left Behind.
The Total Picture: Industry Spending Tops $300 Million
Leading the Future and Innovation Council Action are not operating in isolation. When you add up all the committed and reported spending from AI-aligned political entities, the industry's total 2026 electoral investment now exceeds $300 million, according to reporting from Implicator AI and Quartz.
The broader ecosystem includes:
- Meta California and a separate American Technology Excellence Project, both backed by Meta and both focusing significant resources on state-level races where AI-restrictive legislation is most likely to originate.
- Andreessen Horowitz's own political network, which has been funding anti-regulation candidates at the state and federal level since at least the SB 1047 fight in California.
- Smaller industry-aligned PACs from individual companies, including some funded by AI infrastructure firms that have a direct financial stake in defeating data center moratoriums and energy-related AI restrictions.
The crypto industry's success in the 2024 cycle — where crypto-backed PACs spent heavily, and crypto-backed candidates won 10 out of 11 congressional primaries they entered — has clearly served as a proof of concept for AI's political playbook. The template is borrowed almost directly: identify the races, flood them with money, run sharp digital advertising campaigns, and shape the composition of the legislature before a single floor vote ever happens.
Candidate Strategy: Who Gets Backed and Who Gets Hit
The campaigns funded by Leading the Future and its allied groups are already targeting specific races with surgical precision.
On the support side, Chris Gober, a Republican congressional candidate running in Texas's 10th District, has become a marquee beneficiary of pro-AI spending. Gober has positioned himself as an AI innovation champion and has actively courted the tech industry's backing.
On the opposition side, Alex Bores, a Democratic candidate running for a Manhattan congressional seat, has become a primary target. Bores led the push for New York's recently adopted AI safety law, the RAISE Act, and for that alone he has found himself in the crosshairs of millions in negative advertising. The playbook here is explicit: make a visible example of a pro-regulation candidate in a high-profile race, send a message to every other legislator watching, and establish what the political cost of backing AI safety bills actually looks like.
On the other side of the ledger, Senator Marsha Blackburn (R-TN) is expected to receive "thank you" advertising from groups on the regulatory side, while the pro-deregulation camp has its own list of incumbents it plans to support in Senate primaries and general elections.
The strategy mirrors what AIPAC has executed for decades in congressional races: create a credible enough threat of being primaried or outspent in the general election that most legislators rationally decide the safer course is to vote with the donors. Whether that calculus works for AI — a technology that polling suggests voters across the political spectrum remain skeptical of — remains to be tested.
The State-Level Bills This Money Is Designed to Kill
The political spending is not abstract. It is directly targeted at specific pieces of legislation that the AI industry views as existential threats to its operating environment.
The ghost that haunts this entire campaign is California's SB 1047, the AI safety bill that would have imposed liability on AI developers for harms caused by their systems and required emergency "kill switch" capabilities for frontier models. Governor Gavin Newsom vetoed the bill in September 2024 after intense lobbying from Anthropic, OpenAI, Google, Meta, and Andreessen Horowitz. The veto was a victory — but a narrow and politically uncomfortable one, and the coalition that pushed for the bill has not gone away.
Now a new generation of state-level bills is advancing. New York's RAISE Act — which requires AI safety protocols, incident disclosure requirements, and measures to prevent mass casualty risks — is the most prominent current target. California's Transparency in Frontier Artificial Intelligence Act is another. Pro-AI forces see a federal preemption strategy as the cleanest long-term solution: if Congress passes a single national standard that explicitly overrides state laws, the whack-a-mole problem of fighting bills in Sacramento, Albany, Austin, and Tallahassee simultaneously goes away.
President Trump has also issued an executive order directing the Commerce Department to explore using the Commerce Clause to preempt state AI regulations — a legally aggressive approach that, if upheld, would render the entire state-level regulatory push moot. A parallel threat: federal broadband grant funding could be withheld from states that implement AI regulations that conflict with federal policy. The electoral campaign and the executive strategy are designed to reinforce each other.
For deeper context on the Senate-side effort to restrict AI infrastructure buildout, see: Senate AI Data Center Moratorium and the Energy Grid Bill Explained.
White House Tension: When One PAC Went Off Script
The most revealing subplot of the entire campaign is the White House's furious reaction to Leading the Future — despite the fact that the PAC is ostensibly pushing a pro-AI, deregulatory agenda that aligns with the administration's stated goals.
The problem, from the Trump camp's perspective, is Josh Vlasto — the Democratic operative co-leading the PAC. The White House was explicit in its displeasure. "Any group run by Schumer acolytes will not have the blessing of the president," a White House official told NBC News, using language that left little room for diplomatic interpretation. A second official warned that "any donors or supporters of this group should think twice about getting on the wrong side of Trump world," adding that the administration was "carefully monitoring who is involved."
One White House source described the move by Andreessen Horowitz and OpenAI as "a slap in the face."
The episode reveals a real tension inside the pro-AI political project: the industry wants to present a bipartisan front in order to pass durable federal legislation that will survive a potential change in administrations. The White House, by contrast, wants to control the electoral machinery and ensure that pro-AI spending flows exclusively through Trump-aligned vehicles. That tension is being managed, somewhat awkwardly, by running two parallel PACs simultaneously — but it will become harder to contain as the 2026 cycle heats up and resource allocation decisions force the industry to choose sides.
The Counter-Coalition: AI Safety Money Enters the Race
The pro-AI political blitz has not gone unanswered. A counter-coalition is forming around AI safety and more cautious regulation — and it, too, is mobilizing significant resources.
Anthropic — which occupies an unusual position in the industry as a company that takes AI safety seriously while also being a commercial frontier AI developer — donated $20 million to Public First, a nonprofit founded by former U.S. Representatives Chris Stewart (R-UT) and Brad Carson (D-OK) to advance AI safety education and policy. "The AI policy decisions we make in the next few years will touch nearly every part of public life," the company said in a statement announcing the contribution.
Brad Carson, co-leading Public First, has pointed to polling that "consistently shows significant public concern about AI and overwhelming voter support for guardrails that protect people from harm" — framing the counter-campaign as the one actually aligned with voter preferences, even if it is outgunned in raw dollars.
The tension between Anthropic's $20 million on the safety side and its industry peers' and investors' $100 million-plus on the deregulation side is a microcosm of the industry's fundamental internal contradiction: companies that publicly acknowledge AI risk while simultaneously lobbying against the regulations that would require them to address it.
What This All Means for AI Regulation
Step back and the picture that emerges is both historically significant and deeply unsettling for anyone who believes that technology policy should be set by deliberative democratic processes rather than the financial leverage of a handful of very rich people.
The AI industry is following a playbook that the financial sector, the pharmaceutical industry, and the fossil fuel industry have each used at various points in American history: when regulation threatens profitability, mobilize politically before the regulatory consensus solidifies. The strategy often works — at least in the short term. But it also generates backlash, and the polling data here is a real warning signal. Across party lines, voters are more skeptical of AI than the industry would like to admit, and "more Republicans than Democrats favor regulation," according to surveys cited in industry coverage.
There is also the question of whether the federal preemption strategy is legally durable. Constitutional scholars are divided on whether the Commerce Clause approach outlined in Trump's executive order would survive judicial review. States have broad police powers, and AI safety could plausibly be framed as a public health and welfare matter that falls within traditional state authority. If the preemption strategy fails in the courts, the industry is back to fighting bill-by-bill in fifty state legislatures — and the electoral money spent to shape Congress may prove insufficient.
The $300 million being committed to the 2026 midterms is, in one sense, a measure of how much the industry believes is at stake. If federal AI legislation gets written by a Congress shaped by this spending, it will likely include a light-touch regulatory framework with federal preemption of state laws, limited liability exposure for AI companies, and no mandatory capability limitations or emergency shutdown requirements. That would be the outcome the industry is paying for.
Whether voters, state legislatures, and ultimately the courts will allow that outcome to stand is a different question — and one that will take years, not months, to answer.
Conclusion
The $100 million pro-AI political campaign — part of a larger $300 million industry mobilization — marks a fundamental shift in how Silicon Valley engages with democratic governance. No longer content to rely on technical credibility, op-ed persuasion, or congressional testimony, the AI industry is now trying to buy the political environment it wants before the regulatory window closes.
The immediate stakes are clear: the 2026 midterms will determine whether Congress passes a federal AI law that preempts state-level safety bills, or whether the state-level regulatory movement continues to gain momentum. The longer-term stakes are harder to calculate — but they involve nothing less than how one of the most consequential technologies in human history gets governed, by whom, and in whose interest.
The answer, increasingly, depends on who raises more money.