TL;DR: In late January 2026, the Future of Life Institute convened an extraordinary gathering in New Orleans that produced the Pro-Human AI Declaration — a five-pillar framework signed by an improbable coalition spanning Steve Bannon and Ralph Nader, labor unions and evangelical churches, Nobel economists and Silicon Valley critics. The declaration demands mandatory off-switches on powerful AI, a pause on superintelligence development, pre-deployment mental health testing for chatbots, and criminal liability for executives whose systems cause catastrophic harm. A national poll found 73% of likely voters back the pro-human approach over fast, unregulated AI development. It is non-binding. But it may be the most politically significant AI governance document produced in the United States to date.
Table of Contents
- Why Bipartisan AI Governance Is Unprecedented
- What Is the Pro-Human AI Declaration?
- The Five Pillars Explained
- The Off-Switch Mandate — What It Actually Means
- The Superintelligence Pause — What Counts as Scientific Consensus?
- The Polling Data — 73% Is More Than You Think
- Who Signed It — Strange Bedfellows Analysis
- What It Doesn't Do — Limitations and Enforcement Gap
- Why Now — The Political Context
- Conclusion — From "Impossible" to "Here's How"
Why Bipartisan AI Governance Is Unprecedented
For the past three years, the dominant narrative around AI regulation has been collapse. The EU AI Act passed, but its enforcement timeline stretches years into the future. The Biden executive order on AI was rescinded in the first days of the Trump administration. Senate AI working groups produced frameworks that went nowhere. Every attempt to build legislative consensus fractured along familiar fault lines — Democrats worried about corporate power and civil liberties, Republicans worried about government overreach and innovation stifling, and the tech industry lobbying both sides simultaneously.
The consensus that emerged from New Orleans on January 26–28, 2026 is remarkable precisely because it shouldn't exist. The people in that room disagree about nearly everything else in American politics. Steve Bannon and Ralph Nader do not agree on trade policy, immigration, climate, healthcare, or the basic meaning of democracy. The American Federation of Teachers and the Congress of Christian Leaders do not share a political home. And yet all of them signed the same document, with the same five pillars, calling for the same hard constraints on AI development.
The question worth asking is not whether this coalition is unusual — it obviously is. The question is what it signals about the underlying politics of AI governance that such an unlikely alignment is now possible, and why it happened now rather than two years ago.
The answer has everything to do with where AI development actually is in early 2026, and with a growing public sense that the industry has been allowed to define its own rules for long enough.
What Is the Pro-Human AI Declaration?
The Pro-Human AI Declaration is a governance framework produced through a series of in-person gatherings organized by the Future of Life Institute, culminating in a ratification meeting in New Orleans in late January 2026. The document was reported by NBC News and attracted widespread coverage in the days following its March release.
The declaration opens with a stark framing: humanity stands at a fork in the road. One path is "a race to replace" — humans replaced as creators, counselors, caregivers and companions, then in most jobs and decision-making roles, concentrating power in "unaccountable institutions and their machines." The other path is one where "trustworthy and controllable AI tools amplify rather than diminish human potential."
The document is not a wish list. It is a structured framework with five named pillars, each containing multiple specific requirements. Some of those requirements are aspirational. Others are specific enough to form the basis of legislation if any legislature chose to act on them.
As Joe Allen, one of the participants, described the process: it represented "painstaking consensus among intellectuals and activists" spanning from "reasonable techno-optimists" to "quasi-Luddites." That span is the whole point. When people with radically different views on technology can agree on a set of constraints, those constraints represent something close to a floor — a minimum that even the most AI-optimistic among us can accept.
More than 40 organizations backed the declaration. The signatories include figures whose political identities span the entire ideological spectrum. What they share is a conviction that AI development, as currently practiced, lacks the accountability structures that any other industry of comparable consequence would face.
The Five Pillars Explained
Pillar 1: Keeping Humans in Charge
This is the foundational pillar, and it contains the declaration's hardest mandates. The core assertion is simple: "Humanity must remain in control. Humans should choose how and whether to delegate decisions to AI systems."
But the pillar goes beyond slogans. It specifies:
- Meaningful Human Control: Humans must have "authority and capacity to understand, guide, proscribe, and override AI systems" — not just nominal oversight, but operational capability.
- Off-Switch: "Powerful AI systems must have mechanisms that allow human operators to promptly shut them down." This is a technical requirement, not a policy preference.
- No Reckless Architectures: AI systems must not be designed to self-replicate, autonomously self-improve, resist shutdown, or control weapons of mass destruction.
- No Superintelligence Race: Development of superintelligence should be prohibited until there is "broad scientific consensus that it can be done safely and controllably, and there is strong public buy-in."
- Independent Oversight: Highly autonomous AI systems require "pre-development review and independent oversight" — with "genuine authority to understand, prohibit, and override, not industry self-regulation."
- Capability Honesty: Companies must provide "clear, accurate and honest representations" of their systems' capabilities and limitations.
That last point is subtler than it appears. The AI industry has a long history of overstating capabilities during fundraising and product launches while understating risks during regulatory conversations. The declaration treats this asymmetric honesty as a governance problem, not merely a PR one.
Pillar 2: Avoiding Concentration of Power
This pillar reflects a concern that cuts across traditional political lines: that a small number of AI companies are accumulating unprecedented economic and informational power, and that existing antitrust frameworks are not equipped to address it.
Key requirements include:
- No AI Monopolies: AI monopolies that "concentrate power, stifle innovation, and imperil entrepreneurship must be avoided."
- Shared Prosperity: The economic benefits of AI "should be shared broadly" — a principle that resonates with both progressive labor advocates and conservative populists worried about elite capture.
- No Corporate Welfare: AI corporations should not be "exempted from regulatory oversight or receive government bailouts."
- Democratic Authority Over Major Transitions: Decisions about AI's role in transforming work, society, and civic life "require democratic support, not unilateral corporate or government decree."
- Avoid Societal Lock-In: AI development must not "severely limit humanity's future options or irreversibly limit our agency over our future."
The lock-in principle is particularly significant. It is a direct response to concerns raised by AI safety researchers about irreversibility — the idea that some AI deployment decisions, once made at scale, cannot be undone. The declaration explicitly makes avoiding that irreversibility a governance requirement.
Pillar 3: Protecting the Human Experience
This pillar is the most culturally specific and reflects the religious and family-values constituencies that helped make the coalition possible. But it also contains some of the most technically concrete requirements in the entire document.
- Defense of Family and Community Bonds: AI should not supplant "the foundational relationships that give life meaning — family, friendship, faith communities, and local connections."
- Child Protection: Companies must not exploit children or "undermine their wellbeing with AI interactions creating emotional attachment or leverage."
- Right to Grow: AI companies should not be allowed to "stunt children's physical, mental or social growth or deprive them of essential experiences for healthy development during critical periods."
- Pre-Deployment Safety Testing: "Like drugs, chatbots must undergo pre-deployment testing for increased suicidal ideation, exacerbation of mental health disorders, escalation of acute crisis situations, and other known harms."
- Bot-or-Not Labeling: AI-generated content "that could reasonably be mistaken for human-generated must be clearly labeled as such."
- No Deceptive Identity: AI must clearly identify itself as artificial and non-human, and must not claim experiences it lacks.
- No Behavioral Addiction: AIs should not cause addiction or compulsive use "through manipulation, sycophantic validation, or attachment formation."
The pre-deployment testing requirement deserves particular attention. The analogy to pharmaceuticals is deliberate and pointed. The FDA requires clinical trials for drugs precisely because the harm profile of a new substance cannot be fully known until it is tested systematically. The declaration applies the same logic to AI: if a chatbot will interact with millions of people, including people in mental health crisis, the company deploying it should be required to test for harm before deployment, not discover it in the field.
Pillar 4: Human Agency and Liberty
This pillar addresses the legal and political rights that AI development threatens or could threaten, and it reflects the libertarian and civil liberties wings of the coalition.
- No AI Personhood: AI systems must not be granted legal personhood, and must not be designed "such that they deserve personhood."
- Trustworthiness: AI must be "transparent, accountable, reliable, and free from perverse private or authoritarian interests."
- Liberty: AI must not curtail "individual liberty, freedom of speech, religious practice, or association."
- Data Rights and Privacy: People should have power over their personal data, including "rights to access, correct, and delete it from active systems, AI training sets, and derived inferences."
- Psychological Privacy: AI should not be allowed to exploit data about "the mental or emotional states of users."
- Avoiding Enfeeblement: AI systems "should be designed to empower, rather than enfeeble their users" — a direct critique of dependency-maximizing product design.
The enfeeblement principle is one of the most underappreciated ideas in the declaration. It names something that AI critics across the political spectrum have struggled to articulate: the worry that AI, optimized for engagement and convenience, systematically degrades the human capabilities it replaces. The declaration treats that outcome not as an acceptable side effect of progress but as a design failure that governance should prevent.
Pillar 5: Responsibility and Accountability for AI Companies
This is where the declaration sharpens into legal language. The previous four pillars describe what AI should and should not do. This one describes who bears responsibility when it goes wrong.
- No Liability Shield: AI must not act as "a liability shield, preventing those deploying it from being legally responsible for their actions."
- Developer Liability: Developers and deployers "bear legal liability for defects, misrepresentation of capabilities, and inadequate safety controls, with statutes of limitation that account for harms emerging over time."
- Personal Liability: "There should be criminal penalties for executives responsible for prohibited child-targeted systems or ones causing catastrophic harm."
- Independent Safety Standards: AI development "shall be governed by independent safety standards and rigorous oversight."
- No Regulatory Capture: AI companies must not be "allowed undue influence over rules that govern them."
- Failure Transparency: If an AI system causes harm, "it should be possible to ascertain why as well as who is responsible."
- AI Loyalty: AI performing functions in professions with fiduciary duties — health, finance, law, therapy — "must fulfill all of those duties, including mandated reporting, duty of care, conflict of interest disclosure, and informed consent."
The AI loyalty principle addresses one of the most practically urgent governance gaps in the current landscape. AI systems are being deployed as de facto therapists, financial advisors, and legal aids. The fiduciary duties that apply to human professionals in those roles do not currently apply to the companies deploying AI in those capacities. The declaration says they should.
The Off-Switch Mandate — What It Actually Means
The off-switch requirement is simultaneously the most intuitive and most technically demanding requirement in the declaration. At the surface level, it seems obvious: of course AI systems should be able to be shut down. The problem is that obvious is not the same as implemented.
Powerful AI systems in 2026 are increasingly agentic — they operate across distributed infrastructure, spin up subprocesses, interact with external APIs, and maintain state across multiple sessions. Designing a true off-switch for such a system is not trivial. It requires deliberate architectural choices at the design stage, not an afterthought added before deployment.
The declaration specifies that powerful AI systems must have mechanisms "that allow human operators to promptly shut them down." The word "promptly" is doing significant work here. A shutdown mechanism that requires 72 hours of coordination across cloud providers, legal teams, and government contractors is not a prompt shutdown. It is a bureaucratic fiction.
The complementary requirement — that AI systems "must not be designed so that they can self-replicate, autonomously self-improve, resist shutdown, or control weapons of mass destruction" — closes the obvious loophole. A sufficiently capable AI that has been allowed to distribute itself across systems and resist shutdown commands has effectively made the off-switch inoperative. The declaration bans that architectural pattern at the design stage.
This is one of the few places where the declaration's requirements would, if enacted as law, directly constrain how frontier AI labs build their systems. It is not a content moderation requirement or a labeling requirement. It is a structural constraint on the architecture of powerful AI.
The Superintelligence Pause — What Counts as Scientific Consensus?
The declaration's superintelligence pause requirement is its most consequential and most contested provision. It states that "development of superintelligence should be prohibited until there is broad scientific consensus that it can be done safely and controllably, and there is strong public buy-in."
Two conditions, both genuinely hard to meet.
The "broad scientific consensus" condition faces an immediate definitional problem: there is no agreed scientific definition of superintelligence, no agreed metric for when a system qualifies, and no existing body with the authority to certify that a given system does or does not cross that threshold. The AI safety research community is itself divided on how to define the concept and how to measure progress toward it.
The declaration does not resolve these definitional questions. What it does is establish a principle: that the bar for proceeding should be scientific consensus, not corporate confidence. That reframes the burden of proof. Currently, the burden falls on critics to demonstrate that a system is unsafe before development can be constrained. The declaration inverts that: companies bear the burden of demonstrating safety to a scientific standard before crossing the threshold.
The "strong public buy-in" condition is equally demanding in a different way. It requires democratic legitimacy for a transition that could be the most consequential in human history. This is not a standard that any technology development process currently meets. It implies something like a national or global referendum, or at minimum a level of public deliberation that does not currently exist in any AI governance framework.
Critics will argue this makes the pause requirement unworkable in practice — that it sets an infinitely high bar that can never be cleared. Proponents would respond that that is precisely the point for a technology whose risks, by the admission of its own developers, include civilizational disruption.
The Polling Data — 73% Is More Than You Think
A national poll conducted February 19–20, 2026 — just weeks after the New Orleans ratification meeting — found that 73% of likely voters prefer a pro-human approach to AI over fast, unregulated development.
That number deserves more attention than it has received.
Seventy-three percent is not a majority. It is a supermajority. In a political environment where partisan polarization pushes agreement on almost any policy question below 60%, 73% across likely voters — a sample that skews toward higher engagement and stronger partisan identity — is a genuinely extraordinary finding. It is higher than the approval ratings of almost every sitting elected official in the United States. It is higher than support for most popular policy interventions, including those that pass easily.
The conventional wisdom in policy circles has been that AI governance is politically toxic — that voters don't understand AI well enough to have stable opinions, that the industry's economic narrative dominates, and that any attempt to constrain AI development will be painted as anti-innovation and will lose. The 73% figure challenges all three of those assumptions simultaneously.
What makes the number particularly significant is the timing. The poll was conducted after the Trump administration rescinded Biden's AI executive order, signaling a policy direction of accelerated, lightly regulated AI deployment. The 73% finding suggests that the administration's approach diverges substantially from where the public actually is, even among the likely voters who make up the Republican electoral base.
This is the political opportunity the declaration's backers are pointing to. The question is whether any elected official will be willing to act on it.
Who Signed It — Strange Bedfellows Analysis
The signatory list is worth examining in detail, because it tells you something important about why this coalition was possible and what it took to build it.
From the political right: Steve Bannon, Glenn Beck, and associated conservative voices. Their participation reflects the nationalist-populist strand of conservatism that has grown increasingly suspicious of Silicon Valley power concentrations. For this constituency, the AI threat is primarily a threat to American sovereignty, community institutions, and working-class employment. The declaration's anti-monopoly provisions, its protection of family and religious community bonds, and its resistance to "unilateral corporate decree" speak directly to those concerns.
From the political left: Ralph Nader and Progressive Democrats of America. Their participation reflects the progressive strand of economic critique that has long argued technology giants evade the accountability structures applied to other industries. The liability provisions, the labor-friendly prosperity-sharing requirements, and the consumer protection logic of pre-deployment testing all align with progressive governance priorities.
From the scientific community: Daron Acemoglu, the Nobel Prize-winning economist whose recent work has focused on the labor market effects of automation, and Yoshua Bengio, one of the pioneers of modern deep learning who has become one of the most prominent voices for AI safety constraints. Bengio's presence is particularly significant. He is not a critic of AI from the outside. He is one of the people who built it, and his endorsement of a superintelligence pause carries a different weight than a politician's.
From civil society: The American Federation of Teachers represents millions of workers who see AI-driven automation as a direct economic threat. The Congress of Christian Leaders represents a constituency that frames AI risks in terms of human dignity and the protection of childhood. The Future of Life Institute, which convened the process, has been working on AI governance since 2014 and brings the policy infrastructure that made the coalition's work possible.
The ideological spread is not accidental. The Future of Life Institute appears to have deliberately designed the New Orleans process to include voices from all points of the political compass, precisely because a declaration signed only by progressives or only by conservatives would have been dismissed as partisan. The heterodox coalition is the point.
What It Doesn't Do — Limitations and Enforcement Gap
The declaration is non-binding. There is no enforcement mechanism. No government has adopted it. No legislature has introduced legislation based on it. No AI company has agreed to be bound by its provisions.
This is the central limitation, and it is worth being direct about it. A declaration signed by Steve Bannon and Ralph Nader is a remarkable political artifact. It is not a law. It does not require OpenAI to build an off-switch. It does not require Google to conduct pre-deployment mental health testing on Gemini. It does not subject any AI executive to criminal liability for anything.
The gap between the declaration's ambitions and its enforceability is wide, and the AI industry has had years of experience navigating exactly this kind of gap. Voluntary principles, industry self-regulation frameworks, and non-binding governance documents have proliferated since 2018. They have had minimal effect on the pace or character of AI development.
There is also a definitional problem throughout the document. Terms like "powerful AI systems," "superintelligence," "broad scientific consensus," and "strong public buy-in" all require operationalization before they can serve as the basis for actual governance. The declaration names the principles clearly. It does not resolve the hard measurement problems that any legislative implementation would need to confront.
The liability provisions, while specific in tone, would require substantial legislative action to implement. The U.S. legal system currently provides significant protection to AI companies through Section 230-style frameworks and the difficulty of establishing causal liability for harms from complex software systems. The declaration's requirement of developer liability for "defects" and "inadequate safety controls" implies a product liability framework that does not currently exist for AI.
Finally, the declaration is a U.S.-centric document. AI development is global. Even if every provision were enacted into U.S. law tomorrow, development would continue in jurisdictions where no such constraints apply. The declaration does not address international coordination.
Why Now — The Political Context
The timing of the declaration's release is not coincidental. It came into the public conversation during the first week of March 2026 — precisely when AI governance was generating its most concentrated news coverage since the GPT-4 launch.
The backdrop includes two events that framed the week's AI coverage:
The Anthropic ban. The Trump administration labeled Anthropic a "supply-chain risk" and effectively barred federal agencies from using its products, reportedly because Anthropic refused to sign a military AI contract without explicit prohibitions on domestic surveillance and autonomous weapons. Anthropic is now suing the Pentagon to challenge the designation. The episode illustrated exactly what the declaration's backers have been arguing: that AI governance is currently being made through opaque procurement decisions, not democratic deliberation.
The OpenAI Pentagon deal. Within hours of the Anthropic ban, OpenAI announced its own agreement to provide AI models on classified Defense Department networks. The deal was announced before the ethical guardrails were finalized. OpenAI's own head of robotics, Caitlin Kalinowski, resigned in protest, calling it a governance failure. Sam Altman acknowledged the rollout appeared "opportunistic and sloppy."
These events are precisely the dynamic the declaration was designed to address: consequential decisions about AI's role in surveillance and weapons being made through rushed corporate negotiations, without democratic oversight, without pre-defined ethical constraints, and without accountability mechanisms.
The declaration's fifth pillar — Responsibility and Accountability — reads like a direct response to the Pentagon deal sequence. The requirements for independent safety standards, prohibition on regulatory capture, and personal liability for executives whose systems cause catastrophic harm are all structurally responsive to the failure modes that the OpenAI-Pentagon episode illustrated.
The political context also includes a broader public mood shift. The initial wave of AI enthusiasm that accompanied the ChatGPT launch in late 2022 has given way to more complicated public sentiment. Polling consistently shows growing anxiety about AI's effects on employment, privacy, and the quality of information. The 73% figure in the pro-human poll is consistent with a broader pattern: the public has moved faster than the policy class in developing skepticism about unregulated AI development.
Conclusion — From "Impossible" to "Here's How"
The most important thing about the Pro-Human AI Declaration is not any specific provision. It is the demonstration that bipartisan AI governance is possible.
For years, the conventional wisdom held that AI regulation was politically intractable — that the left and right disagreed too fundamentally about the role of government in markets to ever find common ground on technology constraints. That conventional wisdom was wrong, or at minimum premature. The New Orleans coalition found common ground. It took years of painstaking process, a deliberately heterodox convening, and a shared recognition that the stakes had become high enough to override normal political divisions.
The five pillars represent a genuine minimum. They are the floor, not the ceiling — the constraints that even a Nobel economist and a Steve Bannon ally can agree are necessary. If a legislature were to enact all five pillars into law, the AI industry would look substantially different: more accountable, more transparent, more constrained in its most dangerous architectural choices, and more legally liable for the harms it produces.
The fact that 73% of voters support this approach should be, in theory, dispositive. In a functioning democracy, supermajority public support for a policy framework is supposed to translate into legislative action. Whether it does depends on whether any elected officials are willing to spend political capital on AI governance — and whether the AI industry's lobbying infrastructure, which is substantial and growing, can be overcome.
The Anthropic ban and the OpenAI Pentagon deal have already demonstrated that AI governance is being made, whether or not there is a coherent framework for making it. The question is not whether AI will be governed. It is whether the governance will be democratic and deliberate, or rushed and opaque.
The Pro-Human AI Declaration is an answer to that question — an imperfect, non-binding, definitionally incomplete, politically remarkable answer. Its existence does not guarantee that any of its provisions will become law. But it establishes something that did not exist before: a documented, broad-based, ideologically diverse consensus on what the floor of AI governance should look like.
That consensus is harder to dismiss than a policy paper from a think tank or a warning from a tech ethicist. When Steve Bannon and Ralph Nader sign the same document, and 73% of likely voters agree with what it says, the argument that responsible AI governance is impossible has lost one of its strongest foundations.
The off-switch conversation has started. Whether it produces actual switches is now a political question, not a technical one.
Primary source: NBC News — Pro-Human AI Declaration brings together unlikely group calling for trustworthy AI. Declaration text published by the Future of Life Institute, New Orleans, January 26–28, 2026.