EU AI Act preparatory obligations are now live — 150 days to full enforcement
March 1 marked the binding activation of EU AI Act preparatory obligations. Companies face fines up to 7% of global revenue. What you need to do now.
TL;DR: March 1, 2026 marked the binding activation of the EU AI Act's preparatory obligations. Any company deploying or developing AI systems that touch EU users now has until August 2, 2026, roughly 150 days, before full enforcement of high-risk AI requirements kicks in. The fines are real: up to 7% of global annual revenue for the most serious violations. Unacceptable-risk AI practices have been banned since February 2025. What changed on March 1 is that governance, literacy, and accountability obligations took effect. This is no longer hypothetical regulatory risk. The clock is running.
The EU AI Act entered into force on August 1, 2024. But "entered into force" and "in effect" are not the same thing. The regulation operates on a phased timeline, with different obligations activating at different points.
The first phase — banning outright prohibited AI practices — took effect February 2, 2025. That covered things like AI systems that manipulate people through subliminal techniques, social scoring systems deployed by governments, and real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions). Those bans are already live.
March 1, 2026 activated a different layer. These are the preparatory obligations — the governance, literacy, and accountability requirements that organizations must have in place before the full high-risk compliance requirements hit on August 2, 2026.
Specifically, March 1 triggered three categories of obligation:

- AI literacy: organizations must ensure that staff who build, deploy, or oversee AI systems have an adequate level of AI competence for their roles.
- Governance: internal structures, policies, and documented processes for overseeing AI systems must be in place.
- Accountability: leadership must own AI governance, with clear, named responsibility for compliance failures.
The practical implication is that March 1 is not simply a compliance date you note and move past. It marks the start of demonstrable good-faith effort. When enforcement begins in August, regulators will look backward. Organizations that started their governance work after March 1 are in a materially weaker position than those that began before it.
"Preparatory obligations are not administrative formalities. They reflect a fundamental shift in how the EU views corporate accountability for AI systems — leadership is now personally accountable for AI governance failures." — The European Business Review
The entire EU AI Act is built around a four-tier risk classification. Getting this right is the foundation of your compliance strategy. Misclassifying a system — especially downgrading a high-risk system to limited-risk — is one of the clearest paths to regulatory exposure.
| Risk tier | Description | Examples | Status |
|---|---|---|---|
| Unacceptable risk | Prohibited outright — banned entirely | Social scoring, real-time biometric surveillance in public, manipulative subliminal AI | Banned since Feb 2, 2025 |
| High risk | Heavily regulated — full compliance required | Hiring/HR AI, credit scoring, educational assessment, critical infrastructure control, law enforcement tools, medical devices | Full enforcement August 2, 2026 |
| Limited risk | Transparency obligations only | Chatbots (must disclose AI interaction), deepfake-generating systems, AI-generated content labeling | In effect |
| Minimal risk | No specific obligations | Spam filters, AI-powered video games, basic recommendation systems | No requirements |
The high-risk category is where most enterprise exposure sits. The Act defines high-risk systems across two annexes. Annex I covers AI systems that are safety components of products already regulated under EU law: think AI in medical devices, aviation safety systems, or automotive controls. Annex III covers eight specific application domains regardless of product category:

1. Biometrics, including remote biometric identification and emotion recognition
2. Critical infrastructure, such as safety components for energy, water, and digital infrastructure
3. Education and vocational training, including admissions and student assessment
4. Employment and worker management, including recruitment, promotion, and termination
5. Access to essential private and public services, including credit scoring and insurance pricing
6. Law enforcement, including risk assessment and evidence-evaluation tools
7. Migration, asylum, and border control management
8. Administration of justice and democratic processes
If your AI system makes or materially influences decisions in any of these areas for EU users, you are operating a high-risk system. That classification does not shift because your company is headquartered outside the EU.
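To make the triage concrete, here is a minimal sketch of a first-pass classification helper. The tier names mirror the table above and the domain set abbreviates Annex III; the practice and domain labels are illustrative assumptions, and real classification is a legal judgment, not a lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"  # banned outright since Feb 2, 2025
    HIGH = "high"                # full compliance required by Aug 2, 2026
    LIMITED = "limited"          # transparency obligations only
    MINIMAL = "minimal"          # no specific obligations

# Annex III high-risk domains, abbreviated
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice_democracy",
}

# A few of the prohibited practices, abbreviated
PROHIBITED_PRACTICES = {
    "social_scoring", "subliminal_manipulation", "realtime_public_biometric_id",
}

def triage(practice: str, domain: str, interacts_with_humans: bool) -> RiskTier:
    """First-pass triage only; the final classification call belongs to counsel."""
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if domain in ANNEX_III_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_humans:  # e.g. chatbots must disclose the AI interaction
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("none", "employment", interacts_with_humans=False))  # RiskTier.HIGH
```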
The general-purpose AI (GPAI) distinction matters separately. A large language model accessed via API that underlies another company's product is subject to GPAI obligations at the model level, even if the downstream application is minimal-risk. The provider of the foundation model carries its own set of obligations that cannot be fully passed down to deployers.
The EU AI Act has explicit extraterritorial reach. The key rule is simple: if the output of your AI system is used in the EU, you are within scope.
This matters most for three groups.
Non-EU companies providing AI systems used in the EU. A US company that sells AI-powered software to European businesses, or that operates a consumer AI product accessible to EU residents, is subject to the Act. The fact that your servers are in Oregon and your headquarters is in San Francisco is legally irrelevant. Your designated EU representative becomes your regulatory contact point.
Non-EU companies deploying AI systems whose decisions affect EU individuals. If a US bank uses AI to score EU citizens' credit applications, it is subject to high-risk requirements. If a US tech company uses AI to screen applications from EU-based job candidates, the same applies.
EU companies using AI systems built outside the EU. If you are an EU enterprise and you deploy a US AI vendor's system for hiring decisions or credit scoring, you are the deployer under the Act. The vendor's non-EU status does not reduce your obligations. You inherit compliance responsibility for how the system is used.
The one significant carve-out is research and development. AI systems developed and tested exclusively within R&D environments, not deployed to real users, are exempt during development phases. But "research" cannot be used to shield what is effectively a production system with a research label attached.
For companies already complying with GDPR, the EU AI Act builds on similar principles of data governance and accountability. But it adds requirements GDPR does not cover: risk management systems specific to AI, technical robustness standards, and human oversight mechanisms designed for automated decision-making at scale.
The EU AI Act's fine structure is tiered, and the numbers are significant enough to warrant board-level attention.
| Violation category | Maximum fine |
|---|---|
| Deploying prohibited (unacceptable-risk) AI practices | €35 million or 7% of global annual turnover, whichever is higher |
| Non-compliance with high-risk AI requirements | €15 million or 3% of global annual turnover, whichever is higher |
| Providing incorrect or misleading information to authorities | €7.5 million or 1.5% of global annual turnover, whichever is higher |
The "global annual turnover" basis is the critical detail. Unlike some regulations that cap fines based on domestic or EU revenue, the Act uses total worldwide revenue. For a company with $10 billion in global revenue, a 7% fine is a $700 million exposure. For a $100 billion company, it is $7 billion.
The precedent from GDPR enforcement matters here. Early GDPR enforcement was slow, which led many companies to deprioritize compliance in the first years. But enforcement has accelerated significantly, and several companies have received nine-figure fines for what were characterized as structural governance failures rather than individual incidents. The EU AI Act's enforcement bodies have studied that trajectory. They have indicated they plan to move faster.
SME relief exists but is limited. Small and micro enterprises face lower administrative burdens for some procedural requirements, and for SMEs the Act applies whichever of the flat amount or the percentage is lower, but the cap remains percentage-based. A company with €5 million in revenue still faces a maximum fine of €350,000 for a prohibited AI practice. That is business-ending for a small firm.
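As a quick sketch of the arithmetic: the standard rule takes whichever of the flat amount or the turnover share is higher, while the SME variant takes whichever is lower. Amounts are in euros, and the function name is ours, not the Act's.

```python
def max_fine(turnover_eur: float, flat_cap_eur: float, pct: float, sme: bool = False) -> float:
    """Fine ceiling: higher of the flat amount or the turnover share; for SMEs, the lower."""
    bound = min if sme else max
    return bound(flat_cap_eur, turnover_eur * pct)

# Prohibited-practice tier: EUR 35M or 7% of worldwide turnover
print(max_fine(10_000_000_000, 35_000_000, 0.07))       # 700000000.0
print(max_fine(5_000_000, 35_000_000, 0.07, sme=True))  # 350000.0
```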
Member states establish their own market surveillance and enforcement authorities. The European AI Office, established within the European Commission, handles enforcement for GPAI models with systemic risk. National authorities handle everything else. This creates variation in enforcement intensity across member states, but the legal obligations are uniform across all 27.
August 2, 2026 is the date by which organizations must be in full compliance with high-risk AI requirements. That leaves roughly 150 days from March 1 (154 by the calendar), which is not a lot of runway for organizations that have not started.
What full compliance actually requires for a high-risk AI system:
Risk management system (Article 9). A documented, ongoing process for identifying, analyzing, and mitigating risks throughout the AI system's lifecycle. This must be updated continuously as the system is used and new risks emerge. A one-time risk assessment at launch does not satisfy this requirement.
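What "ongoing" means in practice is a register that gets reviewed on a cadence, not a PDF from launch day. A minimal sketch, with illustrative field names that are our assumptions rather than language from the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    """One row in a living risk register (schema is illustrative)."""
    risk_id: str
    description: str
    severity: str                  # e.g. "low" / "medium" / "high"
    mitigation: str
    last_reviewed: date
    review_cadence_days: int = 90  # continuous updating, not a one-time assessment

    def is_overdue(self, today: date) -> bool:
        return (today - self.last_reviewed).days > self.review_cadence_days
```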
Data and data governance (Article 10). Training, validation, and testing datasets must meet quality criteria. Relevant biases must be examined and addressed. Data governance practices must be documented. This requirement has teeth: regulators can request access to documentation.
Technical documentation (Article 11). A comprehensive technical file covering the system's purpose, capabilities, limitations, development process, performance metrics, and testing results. This documentation follows the system throughout its lifecycle and must be kept current.
Record-keeping and logging (Article 12). High-risk AI systems must be capable of automatic logging of events. For systems involved in decisions affecting natural persons, logs must be retained for periods sufficient to allow post-hoc auditing. The logging requirement is often overlooked in initial compliance planning and can require significant engineering work to implement correctly.
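A minimal sketch of per-decision audit logging, assuming an append-only JSONL file; the record schema and function name are our assumptions, not language from the Act:

```python
import json
import time
import uuid

def log_decision(logfile, model_version: str, inputs: dict, output, latency_ms: float) -> str:
    """Append one audit record per automated decision (schema is illustrative)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,    # redact or hash personal data before writing
        "output": output,
        "latency_ms": latency_ms,
    }
    logfile.write(json.dumps(record) + "\n")  # append-only JSONL, shipped to immutable storage
    return record["event_id"]
```

With something like this in place from day one, retention becomes a storage policy question rather than an engineering scramble.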
Transparency and user information (Article 13). Deployers must provide users with information about the system's capabilities and limitations, and specifically about the logic involved in decisions affecting them. This is distinct from GDPR's explanation requirement — it operates at the system level, not the individual decision level.
Human oversight (Article 14). High-risk systems must be designed to allow human oversight. Humans must be able to monitor system operation, intervene, override, or shut down the system. For automated decision-making pipelines, this means genuine intervention capability, not nominal oversight theater.
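A minimal sketch of what a genuine intervention point can look like: borderline cases wait for a person, and a human can halt the system outright. The class name and uncertainty band are assumptions for illustration.

```python
class OversightGate:
    """Human-in-the-loop gate for an automated decision pipeline (illustrative)."""

    def __init__(self, uncertainty_band: float = 0.10):
        self.uncertainty_band = uncertainty_band
        self.halted = False  # kill switch a human can trip at any time

    def route(self, score: float, threshold: float) -> str:
        if self.halted:
            return "halted"        # nothing flows while a human has stopped the system
        if abs(score - threshold) <= self.uncertainty_band:
            return "human_review"  # the decision waits for a person; nothing auto-executes
        return "auto"

    def halt(self) -> None:
        self.halted = True         # intervene, override, or shut down
```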
Accuracy, robustness, and cybersecurity (Article 15). Systems must meet standards for accuracy, be resilient to errors and faults, and be protected against adversarial manipulation.
For organizations building high-risk systems, the engineering and governance work required to meet these standards is substantial. For organizations deploying third-party high-risk systems, the obligation shifts toward due diligence, contractual arrangements with providers, and maintaining sufficient documentation of their own deployment decisions.
The contrast between how the EU and the US are approaching AI regulation in early 2026 represents one of the starkest policy divergences in the history of technology governance.
The EU AI Act is comprehensive, binding, and risk-based. It applies uniformly across all 27 member states. Compliance obligations are specific. Enforcement bodies are established. The timeline is defined and unchangeable — it is law, not executive preference.
The US approach under the Trump administration is effectively the opposite. Executive Order 14179, signed in January 2025, revoked Biden's earlier AI executive order and directed federal agencies to prioritize AI development over AI restriction. In December 2025, a separate executive order created a DOJ AI Litigation Task Force specifically to challenge state-level AI regulations. The current federal posture is pro-industry, anti-regulation, and hostile to state-level AI rules.
| Dimension | EU AI Act | US approach (2026) |
|---|---|---|
| Legal instrument | Binding regulation (EU Regulation 2024/1689) | Executive orders only (no federal AI law) |
| Risk classification | Four-tier system with defined categories | No federal classification system |
| Extraterritorial reach | Explicit — applies to any AI affecting EU users | Absent |
| Fines | Up to 7% of global revenue | No federal AI-specific fine structure |
| Enforcement body | EU AI Office + national authorities | DOJ AI Litigation Task Force (targets state laws) |
| State/member-state variation | None — uniform across 27 EU members | Significant — 300+ state AI bills pending |
| Timeline certainty | Legally fixed, no presidential reversal possible | Entirely reversible by next administration |
For multinational companies, this divergence creates a clear strategic implication: EU AI Act compliance is the global baseline. The EU requirements are stricter, more specific, and more legally durable than anything in the US regulatory environment. A company that builds its AI governance infrastructure to EU standards will, in practice, meet or exceed existing US state requirements and be positioned to absorb future US federal regulation as it develops.
The US is not moving toward a federal AI law in the near term. The political alignment required for comprehensive legislation does not currently exist. The EU regulatory framework, by contrast, is locked in. The question for US companies is not whether to comply but when and how.
There is no version of this that does not require active work. Ahead of the August 2 deadline, these are the questions that come up most often.
Does the Act apply to companies with no EU presence? Yes, if your AI systems' outputs are used in the EU or affect EU users. The Act applies based on where the AI's effects land, not where the developer is incorporated. US companies with any meaningful EU customer base or EU employee population are within scope for the relevant provisions. Non-EU companies must designate an authorized EU representative.
What is the difference between a provider and a deployer? A provider develops or places an AI system on the market. A deployer uses an AI system developed by someone else in a professional context. Most enterprises are deployers, not providers, unless they are building their own AI systems. Both roles carry distinct obligations: providers bear primary compliance responsibility for the system itself; deployers bear responsibility for how they use it.
Are internal AI tools covered? It depends on what they do. AI systems used internally to make decisions about employees, such as performance evaluation, scheduling, or promotion assessment, may be high-risk systems under the employment and worker management category in Annex III. Many organizations are surprised to find that HR tools they consider internal administrative software are actually subject to high-risk requirements.
Which hiring and HR systems count as high-risk? Any AI system that is used to "make or materially influence" decisions in recruitment, selection, promotion, termination, or performance evaluation of employees or job applicants qualifies. This includes resume screening tools, interview assessment AI, skills gap analysis systems, and AI-driven workforce planning tools. "Materially influence" is broad: if the AI output shapes the decision, even as one input among several, the system is likely in scope.
Can compliance be delegated to the AI vendor? Partially. Providers of AI systems sold to enterprise deployers have their own compliance obligations. But deployers cannot outsource their compliance responsibilities entirely. You remain responsible for verifying that the systems you use are compliant, for how you deploy them, for the transparency and oversight mechanisms you maintain, and for ensuring your use case matches the intended purpose the provider documented. Contractual compliance clauses are necessary but not sufficient.