TL;DR: President Trump signed the AI Accountability Act into law in March 2026, making it the first major federal AI regulation in US history. The law mandates annual third-party bias audits, public disclosure of audit results, and impact assessments for any company using AI in hiring, lending, healthcare, or criminal justice decisions. Non-compliance carries fines of up to $100,000 per day. Every company deploying AI in a "consequential decision" context must comply on a phased timeline running 18 months from signing.
What you will learn
- What the AI Accountability Act requires
- Which companies and use cases are affected
- Mandatory bias audit requirements and public disclosure
- How this compares to the EU AI Act
- Compliance timeline and enforcement mechanisms
- What this means for AI startups and enterprises
- Industry reactions and criticism
- How companies should prepare now
- Frequently asked questions
What the AI Accountability Act requires
The AI Accountability Act is not a guidance document or an executive order. It is a statute passed by Congress and signed into law, meaning it cannot be undone by the next administration without another act of Congress. That is the first thing every general counsel needs to understand.
The law was introduced as H.R. 2371 and passed with bipartisan support, 61 votes in the Senate and 294 in the House. President Trump signed it on March 18, 2026. The vote margins are significant: this is not a partisan measure that will be reversed the next time power changes hands.
At its core, the law does four things.
First, it defines "consequential decision" as any determination that meaningfully affects a person's access to employment, credit, healthcare, housing, education, or criminal justice outcomes. Any AI system making or materially influencing those decisions is a "covered AI system" subject to the law.
Second, it requires deployers of covered AI systems to commission annual third-party bias audits from accredited auditing firms. The audits must test for disparate impact across protected characteristics including race, sex, age, disability status, and national origin. Audit methodologies must be submitted to the National Institute of Standards and Technology (NIST) for review.
Third, it mandates public disclosure. Audit summaries must be published on a company's website and submitted to a centralized federal registry maintained by the FTC within 30 days of completion. The public registry goes live 90 days after the law's effective date.
Fourth, it creates an individual right of explanation. Any person subjected to a consequential AI-assisted decision has the right to request a plain-language explanation of how the AI system reached its conclusion, what data was used, and what their options are for human review.
H.R. 2371 goes further in one domain: it also addresses AI in divorce and family law arbitration proceedings, requiring courts to disclose when any AI system has been used to generate recommendations on custody, asset division, or support calculations. This provision was added by amendment during Senate markup and was not in the original House version.
"This is the moment the United States stops treating AI accountability as a voluntary exercise. The law creates binding obligations with real penalties. Companies that have been waiting for regulatory clarity now have it, and the clock is running." -- Reuters
Which companies and use cases are affected
The scope of the AI Accountability Act is broader than many companies initially assumed. "Covered AI system" is defined to include any automated system that uses machine learning, large language models, neural networks, or algorithmic scoring to generate a recommendation, score, classification, or decision that a human then acts on. The key phrase is "materially influences." You do not have to fully automate a decision to be covered. If an AI system's output shapes a human decision-maker's choice, you are in scope.
Here is how the four primary sectors break down.
Hiring and employment. Any AI resume screening, candidate ranking, video interview analysis, or workforce planning tool falls under the Act if its outputs influence hiring, promotion, pay, or termination decisions. This covers applicant tracking systems with algorithmic ranking, AI-powered reference checks, and so-called "culture fit" scoring models. The law specifically names automated video interview analysis as a covered use case following the controversy over tools like HireVue.
Lending and financial services. Credit scoring, loan underwriting, fraud detection, and insurance pricing models are all covered. This brings the Act into direct contact with the existing Equal Credit Opportunity Act and Fair Housing Act frameworks. Banks and fintechs already subject to ECOA will need to integrate their new AI audit obligations with existing compliance programs.
Healthcare. Clinical decision support tools, diagnostic algorithms, treatment recommendation systems, prior authorization AI, and patient triage tools are covered when their outputs affect patient care decisions. The law carves out pure research tools not deployed in clinical settings, but the line between "research" and "deployed" is already being tested by health systems using foundation models.
Criminal justice. Pretrial risk assessment tools, recidivism prediction algorithms, parole recommendation systems, and predictive policing tools are covered. This is arguably the most high-stakes application of the law given documented bias in tools like COMPAS, whose racial disparities were first exposed by ProPublica in 2016. Several states had already banned or restricted these tools. The federal law now sets a floor.
What is not covered. Internal process automation with no impact on individuals (manufacturing scheduling, inventory optimization), general-purpose AI tools not deployed in a consequential decision context, and AI systems used solely for national security purposes are excluded. Small businesses with fewer than 100 employees and less than $15 million in annual revenue get a three-year transition period with reduced audit requirements.
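For a first-pass scoping exercise, these rules reduce to a checkable predicate. A toy Python sketch: the domain list and small-business thresholds follow the Act as described above, while the function shape and return strings are purely illustrative, not anything from the statute or NIST.

```python
# Domains the Act names as "consequential decision" contexts.
CONSEQUENTIAL_DOMAINS = {
    "employment", "credit", "healthcare",
    "housing", "education", "criminal_justice",
}

def coverage_status(domain: str, materially_influences: bool,
                    employees: int, annual_revenue_usd: float) -> str:
    """First-pass triage of one AI deployment against the Act's scope.
    Not legal advice: real scoping turns on facts a boolean cannot capture."""
    if domain not in CONSEQUENTIAL_DOMAINS or not materially_influences:
        return "likely out of scope"
    if employees < 100 and annual_revenue_usd < 15_000_000:
        return "covered: reduced audit duties until September 2029"
    return "covered: full annual third-party audit required"
```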
Mandatory bias audit requirements and public disclosure
The audit requirements are detailed enough that they effectively define a new professional practice. The law does not specify every methodology, but it establishes minimum standards that NIST must operationalize within six months of enactment.
Who can conduct audits. Only accredited third-party auditors can conduct bias audits under the Act. NIST will publish accreditation criteria within 90 days. The criteria will require auditors to demonstrate technical competence in statistical testing, access to training data and model documentation, and independence from the audited company. In-house audits, even rigorous ones, do not count.
What audits must test. At minimum, audits must measure the selection rate, positive predictive value, and false positive/negative rates of covered AI systems across demographic groups. The standard benchmark is adverse impact ratio: if a protected class experiences selection at less than 80% of the rate of the highest-selected group, that constitutes a presumptive finding of bias requiring remediation. Audits must also examine the composition of training data and document whether demographic variables were used directly or as proxies.
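The 80% threshold is the four-fifths rule familiar from EEOC disparate impact practice. Here is a minimal sketch of that screening computation on hypothetical outcome data; the actual NIST audit methodology has not been published, so treat this as the arithmetic only.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per demographic group from (group, selected) records."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate relative to the highest-selected group.
    Under the Act, a ratio below 0.80 is a presumptive finding of bias."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical resume-screener outcomes: (group label, passed screening).
outcomes = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
         + [("group_b", True)] * 28 + [("group_b", False)] * 72
ratios = adverse_impact_ratios(selection_rates(outcomes))
flagged = [g for g, r in ratios.items() if r < 0.80]  # ["group_b"], at 0.70
```

A real audit also reports positive predictive value and false positive/negative rates per group; the ratio above is only the screening threshold the law names.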
Audit frequency. Annual audits are the baseline. If a company modifies its covered AI system significantly, defined as a change affecting more than 10% of weighted model parameters or decision boundaries, it must commission an expedited audit within 90 days of deployment.
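What counts as a change to "more than 10% of weighted model parameters" is left for NIST to operationalize. Purely as an illustration of one naive reading, the sketch below diffs two checkpoints parameter by parameter; real standards may weight parameters or measure decision-boundary shift instead.

```python
import numpy as np

def changed_parameter_fraction(old: dict[str, np.ndarray],
                               new: dict[str, np.ndarray],
                               tol: float = 1e-6) -> float:
    """Fraction of parameters (by count) that moved by more than `tol`
    between two checkpoints with identical layer names and shapes."""
    changed = total = 0
    for name, old_weights in old.items():
        changed += int(np.sum(np.abs(new[name] - old_weights) > tol))
        total += old_weights.size
    return changed / total

# If more than 10% of parameters moved, the expedited-audit clock
# (90 days from deployment) would arguably start running:
# needs_expedited_audit = changed_parameter_fraction(v1, v2) > 0.10
```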
Public disclosure. This is the provision generating the most industry anxiety. Audit summaries must include: the name and version of the audited system, the use case and decision types it affects, the demographic groups tested, the specific metrics measured and their values, any findings of bias, the remediation steps taken or planned, and the auditor's name and accreditation number. These summaries go into a publicly searchable FTC registry. Competitors, journalists, plaintiffs' attorneys, and regulators will all have access.
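Those enumerated fields map naturally onto a structured record. A sketch of what a registry submission might carry; all field names are assumptions, since the FTC has not published a schema.

```python
from dataclasses import dataclass

@dataclass
class AuditSummary:
    """Fields the Act requires in a public audit summary. Names are
    illustrative; the FTC has not yet published a registry schema."""
    system_name: str
    system_version: str
    use_case: str                      # e.g. "resume screening"
    decision_types: list[str]          # e.g. ["hiring", "promotion"]
    groups_tested: list[str]
    metrics: dict[str, float]          # metric name -> measured value
    bias_findings: list[str]           # empty if no bias was found
    remediation_steps: list[str]       # taken or planned
    auditor_name: str
    auditor_accreditation_number: str
```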
The disclosure requirement is modeled partly on New York City's Local Law 144, which since 2023 has required bias audits for automated employment decision tools used in NYC. The city's experience informed the drafters of the federal Act. NYC's implementation revealed that many audit summaries were vague enough to be effectively uninformative. The federal law's specificity requirements are a deliberate attempt to prevent that outcome.
Right of explanation. Any individual who receives an adverse decision materially influenced by a covered AI system can request a written explanation within 45 days of the decision. The explanation must be in plain language, identify the AI system used, describe the factors that influenced the decision, and explain what recourse options are available, including human review. Companies have 30 days to respond to explanation requests.
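Operationally, those required contents amount to a fixed template over the decision record. A sketch of assembling the four elements; the wording and structure are illustrative, since the statute mandates content, not format.

```python
def render_explanation(system_name: str, factors: list[str],
                       data_used: list[str], recourse: list[str]) -> str:
    """Assemble the four elements the Act requires in a written
    explanation: the system used, the influencing factors, the data
    relied on, and the recourse options (including human review)."""
    return "\n".join([
        f"This decision was assisted by an automated system: {system_name}.",
        "Factors that influenced the outcome: " + ", ".join(factors) + ".",
        "Information used: " + ", ".join(data_used) + ".",
        "Your options: " + "; ".join(recourse) + ".",
    ])
```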
How this compares to the EU AI Act
The timing is notable. The EU AI Act's first substantive compliance obligations took effect in February 2025, and companies are now a year into navigating the preparatory requirements. The US federal law arrives in a very different political environment but lands on similar terrain.
The two frameworks share a core architecture: risk-tiered obligations with the strictest requirements for AI systems affecting fundamental rights and individual life outcomes. But the implementation differs substantially.
The EU AI Act is a comprehensive governance framework covering prohibited practices, high-risk systems, general-purpose AI models, and transparency obligations in a single regulation. It applies to AI developers and deployers, with obligations that cascade through the supply chain. Fines reach 3% of global annual turnover for most violations and 7% for prohibited practices.
The US AI Accountability Act is narrower in scope but more aggressive on the audit-and-disclosure mechanism. It does not attempt to classify AI systems comprehensively or ban specific use cases. Instead, it targets the outputs of AI systems in a defined category of consequential decisions and requires external verification that those outputs are not discriminatory. The mandatory public registry has no direct equivalent in the EU framework.
For multinationals operating in both jurisdictions, the overlapping but distinct requirements create dual compliance tracks. EU AI Act conformity assessments and US bias audits use different methodologies and cover different system characteristics. The same AI system may require two separate audit processes from two separate accredited bodies under two different frameworks. The coordination burden is real.
Compliance timeline and enforcement mechanisms
The law's effective date is September 18, 2026, six months after signing, and the full phase-in runs eighteen months, through September 2027. That sounds like a long runway. It is not.
The compliance timeline breaks into phases.
Days 1-90 (March 18 to June 16, 2026). NIST publishes auditor accreditation criteria. The FTC establishes the public disclosure registry infrastructure. NIST opens the notice-and-comment process for audit methodology standards.
Days 91-180 (June 17 to September 14, 2026). NIST finalizes audit methodology standards. The FTC begins accepting auditor accreditation applications. Companies must begin cataloging their covered AI systems and identifying which deployments require audit.
Day 180 to Month 12 (September 15, 2026, to March 18, 2027). Full compliance required. Covered AI systems must have completed initial bias audits. Audit summaries must be submitted to the FTC registry. Individual explanation rights become enforceable.
Month 12 to Month 18 (March 2027 to September 2027). Annual audit cycle begins. Companies that completed initial audits in the September 2026 window begin commissioning their second-cycle audits.
The small business exemption provides reduced audit requirements, not full exemption, for the first three years. Small businesses must still disclose that they use covered AI systems and provide explanations upon request. They are exempted from the annual third-party audit requirement until September 2029.
Enforcement. The FTC has primary civil enforcement authority with fines of up to $100,000 per day per violation. "Per violation" means per covered AI system per audit cycle. A company with 10 covered AI systems that fails to audit any of them faces potential fines of up to $1 million per day. The Department of Justice can bring criminal charges for willful violations. State AGs retain concurrent enforcement authority under their existing consumer protection and civil rights laws.
Critically, the Act creates a private right of action for individuals harmed by non-compliant AI systems. Any person who suffers an adverse consequential decision from a covered AI system that has not been audited, or whose audit revealed uncorrected bias, can sue for damages in federal court. Statutory damages of $1,000 to $10,000 per incident apply even without proof of actual harm. Class certification is explicitly available. The plaintiffs' bar was already mobilizing before the ink was dry.
What this means for AI startups and enterprises
The AI Accountability Act reshapes the AI product landscape in three ways that matter for every company building or deploying AI.
Audit market creation. The law immediately creates a mandatory, recurring, multi-billion dollar market for AI bias auditing services. NIST accreditation is the bottleneck. In the next 90 days, the auditing profession will effectively be created by federal decree. Established players in algorithmic auditing, such as ORCAA (O'Neil Risk Consulting and Algorithmic Auditing) and Parity, have a massive head start. The Big Four accounting firms are already building AI audit practices. Early accreditation is a competitive moat.
Liability exposure. Any company that sells AI software used in consequential decisions is now a potential defendant in enforcement actions or private litigation, even if they are not the direct deployer. The law is deployer-focused, but the right of explanation requirement will expose model documentation, training data practices, and model cards to legal discovery. AI vendors who have never documented their training data composition are about to discover why that matters.
Startup competitive dynamics. Startups in hiring tech, lending, healthcare AI, and legaltech face a fundamental choice. Build audit-ready products from day one, or build fast and retrofit compliance later. The 18-month timeline makes the second option risky. A startup that closes a Series B in June 2026 and ships a covered AI product in August has six weeks before the compliance deadline. "We'll add it later" is not a viable product roadmap.
Enterprise procurement shifts. Enterprise buyers of AI systems will now face direct liability for non-compliant vendor systems. This fundamentally changes procurement behavior. Requests for proposals will include audit documentation requirements. Vendor contracts will include audit compliance warranties and indemnification clauses. Companies that cannot produce clean audit reports will lose enterprise deals.
Model transparency pressure. The public disclosure requirement creates reputational risk that enforcement risk alone does not. A searchable FTC registry means that audit results for every major company's hiring algorithm, credit scoring model, and clinical AI tool will be publicly visible and comparable. Journalists, researchers, and advocacy groups will build dashboards comparing audit outcomes across companies and sectors. Competitive pressure to show clean audit results is arguably stronger than the threat of fines.
Industry reactions and criticism
The industry response splits sharply along predictable lines, with some unexpected positions.
Tech incumbents are cautiously supportive. Google, Microsoft, and IBM all issued statements supporting the legislation. Their calculation is straightforward: they have the compliance infrastructure to absorb audit costs that will be crippling for smaller competitors. IBM has been publishing algorithmic bias reports voluntarily for years. Microsoft's Responsible AI team has audit-ready documentation for its Azure AI products. The Act essentially codifies practices the largest companies already follow, while raising costs for challengers.
Fintech and HR tech are alarmed. Companies like Workday, SAP, Salesforce, and dozens of smaller HR tech vendors are in direct scope for hiring algorithm audits. The National Venture Capital Association's comment letter during Senate markup argued that the 18-month timeline is "functionally impossible" for companies that would need to engage auditors who do not yet exist, under methodology standards that have not been published. That argument did not carry the day, but the concern is legitimate.
Civil rights organizations are largely supportive but cautious. The NAACP, the Center for Democracy and Technology, and the Electronic Privacy Information Center backed the legislation but flagged gaps. The primary concern is enforcement capture: if NIST's audit methodology standards are drafted with industry input that weakens them, the audits become a compliance checkbox rather than a meaningful accountability mechanism. The public comment period for methodology standards is the next battleground.
Academic researchers are divided. The Algorithmic Justice League, Joy Buolamwini's organization that has documented bias in facial recognition and AI hiring tools, supports the law but argues the audit requirements should be more prescriptive. Some computer science researchers worry that standardizing audit methodology will entrench current measurement approaches that have known limitations, particularly for measuring bias in complex sequential decision systems.
The criticism from the left. Some privacy advocates argue the law does not go far enough. Requiring audits of discriminatory AI systems is not the same as prohibiting them. An audit that documents bias is not meaningful accountability if the company can continue deploying the system after paying a fine. The law has no provision for system bans or mandatory retirement of AI systems with persistent bias. You can be audited, found biased, publish that finding, and keep running.
The criticism from the right. Industry groups aligned with the Trump administration's position argue the law contradicts the federal preemption approach. If the executive order positioned state bias-mitigation mandates as "deceptive" because they compel AI systems to alter truthful outputs, how does a federal law mandating bias audits escape the same critique? The legal tension between the executive order's philosophy and the new statute is real, though courts will ultimately resolve it in favor of the statute.
How companies should prepare now
The eighteen-month phase-in is enough time to comply if you start immediately. It is no time at all if you wait until the September 2026 effective date.
Step one: build your covered AI system inventory. You cannot audit what you cannot find. Every company deploying AI in the four covered sectors needs a comprehensive inventory of AI systems that affect consequential decisions. This includes third-party systems you license, fine-tuned foundation models you have customized, and legacy rule-based systems with ML components. The inventory should include the system name, version, vendor, use case, decision type affected, and current documentation status.
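A sketch of what one inventory row might look like, with fields matching the list above; the record shape and the enum are illustrative, not drawn from the Act.

```python
from dataclasses import dataclass
from enum import Enum

class DocStatus(Enum):
    COMPLETE = "complete"
    PARTIAL = "partial"
    MISSING = "missing"

@dataclass
class CoveredSystemRecord:
    """One row in the covered AI system inventory described above."""
    system_name: str
    version: str
    vendor: str                # "internal" for systems built in-house
    use_case: str              # e.g. "loan underwriting"
    decision_type: str         # e.g. "credit"
    documentation_status: DocStatus
```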
Step two: assess documentation gaps. The audit process requires documentation that many companies have never created. Model cards, training data provenance records, bias testing results from development, and human oversight protocols need to exist before an auditor arrives. Auditors will not create documentation for you. They will audit what exists. If documentation does not exist, auditors must note that finding, which becomes a public disclosure.
Step three: engage the NIST comment process. The methodology standards NIST develops in the next 90 days will determine what auditors look for and how findings are measured. Companies with technical AI teams should participate in the notice-and-comment process. The specific choice of adverse impact ratio thresholds, the treatment of intersectional demographic categories, and the standards for proxy variable analysis all have enormous practical consequences.
Step four: start auditor selection early. There are currently fewer than 50 firms globally with genuine expertise in AI bias auditing. Once NIST accreditation criteria are published, that number will expand, but accredited capacity will remain constrained through the first compliance cycle. Companies that wait until Q3 2026 to engage auditors will find the market fully booked. Start conversations with auditing firms now, even before the accreditation framework is finalized.
Step five: review vendor contracts immediately. If your AI vendor cannot produce audit documentation and commit to bias audit compliance, you are inheriting their compliance risk. Review every AI vendor contract for audit-related warranties, indemnification provisions, and audit cooperation obligations. Renewals and new contracts should include explicit AI Accountability Act compliance representations.
Step six: build the explanation infrastructure. The right of explanation requirement is operational, not just legal. When individuals request explanations, companies need systems to identify which AI model influenced a specific decision, retrieve the relevant model version and inputs, and generate a plain-language explanation within 30 days. Companies without model versioning and decision logging infrastructure cannot meet this requirement. Building it requires technical work that takes months, not days.
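A minimal sketch of that logging layer, assuming an append-only JSON-lines store keyed by decision ID; every name here is illustrative.

```python
import json
import time
import uuid

class DecisionLog:
    """Append-only log tying each consequential decision to the exact
    model version and inputs, so a later explanation request can be
    answered within the Act's 30-day window."""

    def __init__(self, path: str):
        self.path = path

    def record(self, model_name: str, model_version: str,
               inputs: dict, output: dict, subject_id: str) -> str:
        decision_id = str(uuid.uuid4())
        entry = {
            "decision_id": decision_id,
            "timestamp": time.time(),
            "model": model_name,
            "model_version": model_version,  # pin the exact version
            "inputs": inputs,                # features as seen at decision time
            "output": output,                # score or classification returned
            "subject_id": subject_id,        # person affected, for lookup
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return decision_id

    def lookup(self, decision_id: str) -> dict | None:
        with open(self.path) as f:
            for line in f:
                entry = json.loads(line)
                if entry["decision_id"] == decision_id:
                    return entry
        return None
```

The essential property is that the model version and inputs are captured at decision time, not reconstructed later; without that, no explanation can be trusted.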
For companies already working through EU AI Act compliance, the overlap is substantial. Documentation standards, impact assessment methodology, and human oversight protocols developed for EU compliance are directly applicable to US audit preparation. The investment is not duplicated; it is leveraged.
Frequently asked questions
Does the AI Accountability Act apply to AI systems that assist decisions, or only those that make them automatically?
Both. The law covers any AI system whose outputs "materially influence" a consequential decision by a human decision-maker. If your hiring manager uses an AI-generated resume score to rank candidates, even if they make the final call, the AI system is covered. The drafters intentionally used "materially influence" rather than "automate" to prevent the obvious workaround of inserting a nominal human in the loop.
Can a company's in-house AI team conduct the required bias audits?
No. The law explicitly requires third-party audits from accredited auditing firms. In-house assessments, even rigorous ones using the same methodologies, do not satisfy the requirement. This mirrors the logic of financial auditing: the value of the audit derives partly from the auditor's independence from the audited entity.
What happens if a bias audit finds significant discriminatory impact?
Finding bias triggers a remediation obligation, not an automatic system shutdown. Companies must publish the finding in their audit summary, submit a remediation plan to the FTC within 60 days, and complete remediation within 12 months. If a follow-up audit still finds significant bias, the FTC can impose daily fines and mandate a corrective action plan with court oversight. Repeated non-remediation can result in a prohibition order barring the company from deploying covered AI systems in the affected sector.
How does H.R. 2371's divorce arbitration provision work in practice?
Courts using any AI system to generate recommendations on custody, asset division, alimony, or child support calculations must disclose that use to both parties before any recommendation is presented. Parties have the right to request the underlying methodology and a human review of any AI-assisted recommendation. This applies to court-administered arbitration systems, not to private mediation services, though several states are expected to extend similar requirements to private arbitration through companion legislation.
What is the relationship between the AI Accountability Act and the Pennsylvania SAFECHAT Act?
They operate in different domains with complementary logic. The Pennsylvania SAFECHAT Act targets AI chatbot safety for children and focuses on disclosure, parental controls, and content safeguards. The AI Accountability Act targets consequential decision-making AI and focuses on bias audits and individual rights. A company deploying an AI tutoring platform in Pennsylvania might face obligations under both laws: SAFECHAT for the child safety dimensions and the federal Act if the platform makes educational placement or progression decisions. The federal law's child safety carve-out exempts it from preempting SAFECHAT-type state laws.
Sources: Reuters, The Hill, NIST AI Risk Management Framework, FTC Algorithmic Accountability, ProPublica Machine Bias, NYC Local Law 144