TL;DR: We wrote about what the EU AI Act requires when its preparatory obligations went live on March 1, 2026. This is the follow-up: what actually happened in the first six days. The short version — major AI vendors were broadly prepared, SMB providers were not, and regulators spent week one sending signals rather than fines. The 180-day countdown to August 29 full enforcement is now running. What happens in weeks two through twenty-six will define whether the EU AI Act becomes the rigorous governance framework it was designed to be, or another regulation that outlasted the political will to enforce it.
What you will learn
- Week one in context: why the first week matters
- The compliance landscape: who had to file what
- Who came prepared: the big AI vendors on March 1
- Who is scrambling: the SMB and mid-market reality
- The enforcement reality: zero fines, many signals
- High-risk AI under the spotlight: what the first filings reveal
- What enterprises must do in the next 150 days
- The US-EU compliance gap and what it costs
- The enforcement trajectory question
Week one in context: why the first week matters
We covered the mechanics in detail when the preparatory obligations went live — the risk classification tiers, the GPAI transparency requirements, the August 29 deadline for full high-risk AI compliance, and the fine structure that tops out at €35 million or 7% of global revenue. If you need that foundation, that prior analysis remains the essential reference.
But regulatory law and regulatory enforcement are different disciplines. A statute that activates is not the same as a statute that bites. What week one of EU AI Act enforcement tells us is something the text of EU Regulation 2024/1689 never could: how the EU AI Office and the 27 national competent authorities intend to use their power.
That intelligence is worth more than another pass through the compliance checklist.
The first week of major regulation enforcement is almost always an observation period. Regulators watch who files, how they file, what they disclose, and — perhaps more importantly — who does not file at all. The pattern of voluntary compliance in week one tells enforcement bodies where to concentrate investigative resources in months two through twelve. A company that meets the March 1 deadline with substance signals a very different risk profile than a company that files nothing, or files a document clearly designed to appear compliant while revealing nothing.
Week one of the EU AI Act was no exception to this dynamic.
The compliance landscape: who had to file what
The March 1 deadline activated a specific set of obligations, not the full complement that will be required by August 29. Understanding what was actually due matters because it shapes what "compliance" meant on day one.
General-purpose AI (GPAI) model providers — the companies behind large language models including GPT, Claude, Gemini, and Llama — were required to have in place:
- Technical documentation covering their models' architecture, training data characteristics, and performance benchmarks
- Copyright compliance summaries detailing the training data policies they apply to third-party content
- Training data transparency summaries that, while not requiring full disclosure of proprietary training datasets, must describe the types of data used and the filtering processes applied
- Incident reporting mechanisms for models above the systemic-risk compute threshold — broadly interpreted as models trained using more than 10^25 FLOPs
For models designated as posing systemic risk — a category that currently covers the largest frontier models — additional requirements applied on March 1 including adversarial testing documentation and more granular incident response procedures.
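The 10^25 figure invites a back-of-envelope check. The sketch below uses the common scaling-law heuristic of roughly 6 FLOPs per parameter per token to estimate training compute; both the heuristic and the example model size are our illustration, not a method the Act or the EU AI Office prescribes.

```python
# Back-of-envelope systemic-risk check. The 6 * params * tokens estimate
# is the standard scaling-law heuristic for dense transformer training
# compute -- an approximation we are assuming here, not anything the Act
# or the EU AI Office prescribes.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 400B-parameter model trained on 15T tokens.
flops = estimated_training_flops(400e9, 15e12)  # ~3.6e25 FLOPs
print(f"{flops:.1e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(400e9, 15e12)}")
```

On that arithmetic, a model of that hypothetical scale lands several times over the threshold, which is why the systemic-risk category currently covers only the largest frontier models.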
Critically, these GPAI obligations sit at the model-provider level, not the deployer level. An enterprise that uses GPT-4 via API to power a customer service chatbot is not a GPAI provider. But OpenAI, which provides that model, is. The distinction matters for understanding who was in the regulatory crosshairs on March 1.
For companies deploying high-risk AI systems — healthcare diagnostics, hiring algorithms, credit scoring engines, critical infrastructure management — the March 1 obligations were primarily governance-oriented: activating AI literacy programs, establishing internal accountability structures, and demonstrating the beginning of the compliance infrastructure required before August 29. The full technical compliance requirements for high-risk deployers do not land until late August.
Who came prepared: the big AI vendors on March 1
The major frontier AI model providers had the longest runway and the most regulatory visibility, and the evidence from week one suggests they used it.
OpenAI, Anthropic, Google DeepMind, and Meta AI all entered March with compliance teams that had been operational for months. This was not last-minute legal scrambling. For each of these companies, EU AI Act compliance had been a board-level priority since at least mid-2025, when the March 1 deadline was close enough to generate executive attention.
OpenAI's approach reflects a pattern common to the large providers: a dedicated EU regulatory affairs function, separate from its general legal team, tasked specifically with GPAI compliance documentation. The company's training data transparency summaries and technical documentation were structured to satisfy the Act's requirements while revealing the minimum necessary about proprietary training methodology — a balance the Act explicitly permits. Regulatory filings do not require competitive disclosure; they require sufficient information for the EU AI Office to assess systemic risk.
Anthropic, which operates under a safety-focused mandate that aligns structurally with the EU AI Act's philosophy, was in some ways positioned more naturally for compliance than its peers. The company's existing practice of publishing model cards, capability evaluations, and safety assessments created a documentation infrastructure that overlaps substantially with GPAI transparency requirements. Translating that internal discipline into EU-specific regulatory formats was a material task but not a foundational one.
Google DeepMind submitted documentation covering its Gemini family of models. Given that Google operates across consumer, enterprise, and government markets in Europe, its GPAI compliance exposure is among the broadest of any single company. Meta's open-source approach to Llama created a different compliance wrinkle: when a model's weights are publicly released, who bears the provider obligations? The EU AI Office's emerging guidance on open-source models suggests the original developer retains compliance responsibility for the base model, regardless of what downstream fine-tuners do with it. Meta engaged with that question directly in its March 1 filings.
Microsoft's position is dual: as a major deployer of OpenAI models and as a provider of its own Azure-native AI services, it sits on both sides of the provider-deployer line. Its compliance infrastructure reflected that complexity.
The signal from week one among large providers: structured, substantial, and notably un-defensive. Companies that understand their regulatory exposure tend to produce filings designed to demonstrate good faith, not just technical compliance. The early read from EU AI Office sources cited in Reuters coverage of the March 1 activation was that the tier-one providers had submitted documentation of sufficient quality to avoid immediate investigative scrutiny.
Who is scrambling: the SMB and mid-market reality
The picture looks considerably different two tiers down.
SMB AI vendors — companies building vertical AI applications for specific industries, often without dedicated legal teams or regulatory affairs functions — were broadly underprepared for the March 1 obligations. The GPAI transparency requirements apply at the foundation-model-provider level, which means most SMB software companies are not GPAI providers. But many SMB vendors build and deploy applications that qualify as high-risk AI systems under Annex I or Annex III, and the March 1 governance preparation requirements applied to them regardless of company size.
Several patterns emerged among underprepared companies in week one.
The most common failure mode was misclassification: companies that had not rigorously applied the Act's risk criteria to their own products, and consequently had not identified that their hiring software, credit decisioning tool, or clinical support application qualified as high-risk. The classification exercise is not technically complex — the Act's criteria are specific — but it requires legal attention that many SMBs simply had not invested before March 1.
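To make the classification exercise concrete, here is a minimal sketch of what a first-pass triage can look like in code. The tier names follow the Act's structure, but the use-case strings and the mapping are illustrative placeholders rather than the Act's criteria, and no sketch substitutes for legal review against Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk (transparency obligations)"
    MINIMAL_RISK = "minimal-risk"

# Illustrative mapping only: these strings paraphrase a few Annex III
# areas (employment, credit, health, infrastructure) and are not a
# substitute for applying the Act's actual criteria with counsel.
HIGH_RISK_USE_CASES = {
    "hiring-and-candidate-screening",
    "credit-scoring",
    "clinical-decision-support",
    "critical-infrastructure-management",
}

def classify(use_case: str, interacts_with_humans: bool = False) -> RiskTier:
    """First-pass triage of a single product against the risk tiers."""
    if use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH_RISK
    if interacts_with_humans:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

print(classify("hiring-and-candidate-screening"))  # RiskTier.HIGH_RISK
```

The point of the sketch is that the triage itself is mechanical once the criteria are read carefully; the companies that failed on classification had not done the reading, not the coding.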
The second failure mode was geographic ambiguity. A US-based SaaS company selling workforce management software to European clients often did not have a clear view of whether the EU users affected by their AI-generated outputs were being "affected" in a manner that triggered the Act's extraterritorial reach. The answer to the threshold question — "does the output of my AI system affect EU users?" — is almost always yes for any company with European customers. But legal clarity on that question requires analysis, and many companies entered March without having done it.
The third failure mode was organizational: companies that understood the requirements in principle but had not assigned internal ownership. EU AI Act compliance is not purely a legal function, a technical function, or a product function. It requires all three working in coordination. Companies that treated it as a legal checkbox rather than a cross-functional governance project arrived at March 1 without the documentation, literacy programs, or risk management infrastructure the Act requires.
EU market newcomers — US and Asian AI companies that had been expanding into Europe in 2025 without building out local compliance infrastructure — were disproportionately represented in the unprepared category. The EU AI Act does not grant a grace period for market entry. A company that began selling AI services into France in January 2026 was subject to the March 1 obligations on the same timeline as a company that had operated in Europe for a decade.
The enforcement reality: zero fines, many signals
No enforcement actions were brought in week one. This was expected, and it was deliberate.
The EU AI Office, which handles GPAI enforcement centrally, and the national competent authorities across the 27 member states operate under a mandate that explicitly anticipates a transition period. Regulators understand that compliance is a process, not a single point-in-time event. Bringing fines in the first week of preparatory obligations would be legally premature — the Act requires establishing non-compliance before penalizing it, and that typically involves some form of engagement, information request, or investigation before formal enforcement action.
But "no fines" is not the same as "no activity."
The EU AI Office's first week was operationally dense. The office published its initial guidance on GPAI systemic risk classification, clarifying the compute threshold for systemic risk designation and providing the first official interpretation of what "adversarial testing" must include for compliant documentation. This guidance was not in the Act text itself — it came from the EU AI Office's delegated authority to provide implementing guidance, and companies that had been watching the regulatory signals built their March 1 filings around the anticipated guidance.
National competent authorities signaled enforcement priorities through the sectors they indicated would receive early scrutiny. Healthcare AI and financial services AI were consistently identified as priority areas — reflecting both their high-risk classification and the concentration of EU users affected by AI decisions in these domains. Employment AI received similar signals, particularly in Germany and France, where labor rights frameworks create a natural alignment between domestic employment law and the Act's hiring AI requirements.
The enforcement signal that will shape the next 180 days was implicit rather than explicit: regulators were watching voluntary compliance quality very closely. The quality of GPAI documentation submitted in week one will inform which companies receive detailed follow-up information requests before August 29. A company that submitted thin documentation on March 1 should expect inquiries in May or June — before the full enforcement clock runs out.
High-risk AI under the spotlight: what the first filings reveal
The most analytically rich data from week one came from the healthcare AI and hiring AI sectors, where the first formal system filings began to reveal what companies are — and are not — willing to disclose.
Healthcare AI providers submitting documentation under the Act's high-risk requirements were required to describe their systems' intended purpose, performance metrics, known limitations, and the human oversight mechanisms in place. The variance in disclosure quality was striking. Companies with mature clinical validation programs, the kind built for FDA 510(k) clearance or CE marking under the EU MDR, found the Act's documentation requirements familiar. Clinical evidence, limitation documentation, and validation study data translated directly into the formats the Act requires.
Companies that had built clinical AI products primarily for US markets, without the regulatory documentation requirements of the EU medical device framework, were in a materially weaker position. Their products often had robust performance data but insufficient limitation documentation and human oversight protocols — gaps that are easy to identify but not fast to remediate.
Hiring algorithm providers presented a different profile. The EU's employment law traditions, particularly in Germany and France, mean that workforce management AI was already subject to works council consultation requirements in many member states. Some of the EU's largest hiring AI deployments thus arrived at March 1 with more governance infrastructure than their US counterparts precisely because of pre-existing domestic labor law requirements.
Credit scoring AI filings revealed the tension the Act creates between regulatory transparency and commercial confidentiality. The algorithmic logic that determines credit decisions is simultaneously a compliance documentation requirement under the Act and a trade secret that lenders have historically protected vigorously. The Act permits proprietary methodology to be protected in public-facing documentation — the technical file requirement applies to regulators, not to the general public — but some lenders were testing exactly how that distinction would be applied in practice.
The high-risk filings from week one establish the baseline against which regulators will evaluate progress at August 29. A company that filed a minimally acceptable technical file on March 1 has signaled its approach. Regulators will be looking for material improvement by August, or the March 1 filing becomes evidence of a pattern of minimal engagement rather than good-faith compliance.
What enterprises must do in the next 150 days
Week one established the starting state. The countdown to August 29 is running. For enterprises operating high-risk AI systems in EU markets, the next 180 days divide into three concrete phases.
Immediate priority — audit and classify (March–April 2026). The single most common enterprise compliance failure is an incomplete AI system inventory. Most organizations that have not done a structured audit significantly undercount their AI exposure. The exercise requires looking beyond the AI tools formally procured through IT and legal — it includes AI features embedded in SaaS products, AI-enabled analytics in business intelligence platforms, and AI scoring in third-party services accessed by HR, finance, or operations. Every system that produces outputs that materially influence decisions affecting EU users must be in the inventory, and each must be classified against the Act's risk tiers.
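As a minimal sketch of what one row of that inventory might capture: every field name here is our own convention, not a regulatory schema, and the example record is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an enterprise AI inventory. All field names are ours."""
    name: str                   # e.g. "resume-screening-v3"
    owner: str                  # accountable internal team
    vendor: str | None          # third-party provider, if any
    embedded_in: str | None     # host SaaS/BI platform, if embedded
    affects_eu_users: bool      # triggers the Act's territorial scope
    decision_influence: str     # "decides", "recommends", or "informs"
    risk_tier: str              # outcome of the classification exercise
    evidence: list[str] = field(default_factory=list)  # links to docs

inventory = [
    AISystemRecord(
        name="resume-screening-v3",
        owner="hr-technology",
        vendor="AcmeTalent (hypothetical)",
        embedded_in="applicant-tracking SaaS",
        affects_eu_users=True,
        decision_influence="recommends",
        risk_tier="high-risk",
    ),
]

# Systems that need a full technical file before August 29:
needs_tech_file = [r for r in inventory
                   if r.affects_eu_users and r.risk_tier == "high-risk"]
```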
Mid-term priority — build the documentation stack (April–July 2026). High-risk systems require technical documentation that does not exist in most enterprises today. The Article 11 technical file is not a standard IT security assessment or a GDPR data processing record — it is a system-specific document covering intended purpose, performance metrics, known limitations, testing results, and the development lifecycle. For each high-risk system an enterprise operates, this document must be created, maintained, and available to regulators on request. The logging and record-keeping requirements under Article 12 frequently require engineering work: systems that were not built to generate auditable decision logs need to be retrofitted or replaced.
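What "auditable decision logs" means in engineering terms is ultimately a legal question, but a minimal sketch helps scope the retrofit. The fields below are our assumption about what an auditor would need to reconstruct a decision; nothing in this particular format is prescribed by the Act.

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_decision(system_id: str, input_ref: str, output: dict,
                    model_version: str, human_reviewer: str | None,
                    log_path: str = "ai_decision_log.jsonl") -> None:
    """Append one replayable record per AI-influenced decision.

    The fields are our assumption about what an auditor needs to
    reconstruct who/what/when; input_ref points into the governed data
    store rather than storing raw personal data inline.
    """
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,
        "output": output,
        "human_reviewer": human_reviewer,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_decision("credit-scoring-v2", "loan-applications/2026/03/1841",
                {"score": 612, "decision": "refer"}, "2.4.1", None)
```

Even a format this simple exposes the retrofit problem: systems that do not record model versions or stable input references today cannot produce records like this without engineering work.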
Pre-August priority — human oversight mechanisms and vendor contracts (July–August 2026). The human oversight requirement under Article 14 is the most organizationally complex part of high-risk AI compliance. Demonstrating that humans can "understand, monitor, intervene in, or stop" AI system operation is not a policy statement — it is a process that must be documented, tested, and auditable. For enterprises that deploy third-party AI systems, vendor contracts must include specific representations about compliance status. Deployers cannot outsource their own compliance, but they can and must contractually require that providers maintain the technical documentation and compliance infrastructure the Act demands.
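One way to make "intervene in, or stop" operational is an approval gate in front of high-impact outputs. The sketch below is illustrative only: the escalation policy, the review queue, and the function names are ours, not the Act's.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OversightDecision:
    approved: bool
    reviewer: str
    rationale: str

def with_human_gate(ai_output: dict,
                    needs_review: Callable[[dict], bool],
                    review_queue: Callable[[dict], OversightDecision]) -> dict:
    """Route AI outputs through a named human before they take effect.

    needs_review encodes the escalation policy (e.g. every adverse
    decision); review_queue blocks until a human approves or rejects.
    Both are organization-specific assumptions.
    """
    if not needs_review(ai_output):
        return ai_output  # low-impact path: logged, not gated
    verdict = review_queue(ai_output)
    if not verdict.approved:
        return {"decision": "halted", "by": verdict.reviewer,
                "rationale": verdict.rationale}
    return {**ai_output, "approved_by": verdict.reviewer}

# Stub reviewer that approves everything, for illustration only:
auto = lambda d: OversightDecision(True, "reviewer@example.com", "within policy")
result = with_human_gate({"decision": "deny", "applicant": "A-123"},
                         needs_review=lambda d: d["decision"] == "deny",
                         review_queue=auto)
```

The value of structuring oversight this way is that the gate itself produces the audit trail: every high-impact output carries a named reviewer, which is the kind of evidence a documented, tested oversight process requires.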
The US-EU compliance gap and what it costs
One of the clearest week-one signals was the divergence in preparedness between US companies with strong EU operations and US companies that treated Europe primarily as a sales geography.
Companies that had built EU compliance infrastructure for GDPR — dedicated EU data protection officers, robust data governance programs, EU-specific legal entity structures — arrived at the EU AI Act with institutional muscle memory for European regulatory engagement. The compliance culture, the documentation practices, the regulatory affairs relationships: these transferred. The EU AI Act is a materially different regulation, but the organizational capacity to comply with it looked similar.
Companies that had deployed AI products into EU markets while treating Europe as a GDPR checkbox exercise were, in week one, discovering a compliance gap they had not fully budgeted for. The EU AI Act requires more than data governance. It requires risk management systems, technical documentation, human oversight engineering, and AI literacy programs. Building all of that from scratch in 180 days is possible but expensive.
The practical implication of the US-EU compliance gap is increasingly clear in enterprise sales contexts. Large EU enterprises evaluating AI vendors are now asking compliance status questions as a procurement criterion. A US AI vendor that cannot demonstrate EU AI Act compliance infrastructure is at a disadvantage competing against providers who can. In regulated industries — financial services, healthcare, public sector — the compliance gap is becoming a market access question, not just a legal risk question.
The EU AI Act's extraterritorial reach means US companies cannot treat EU compliance as optional for their EU market segment. The fine structure — up to 7% of global annual turnover for the most serious violations — is calculated on worldwide revenue, not EU-derived revenue. A US company that generates 15% of its revenue from EU customers faces potential fines based on 100% of its global revenue. That asymmetry is not accidental. It is designed to make voluntary non-compliance economically irrational for any company with meaningful EU exposure.
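The asymmetry is easy to make explicit with hypothetical figures:

```python
# Illustrative only: hypothetical revenue figures, with the Act's 7% cap
# applied to worldwide annual turnover rather than EU-derived turnover.
global_revenue = 2_000_000_000   # hypothetical US company: $2B worldwide
eu_share = 0.15                  # 15% of revenue from EU customers

eu_revenue = global_revenue * eu_share   # $300M of EU business at stake
max_fine = 0.07 * global_revenue         # $140M maximum fine exposure

print(f"EU revenue: ${eu_revenue:,.0f}; maximum fine: ${max_fine:,.0f}")
# The maximum fine is roughly 47% of a full year of EU revenue, which is
# why voluntary non-compliance is a bad trade at any meaningful EU share.
```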
The enforcement trajectory question
The first week of EU AI Act enforcement did not produce fines, dramatic regulatory actions, or high-profile investigations. It produced exactly what enforcement theorists would predict: a structured assessment period in which regulators categorized their targets, signaled their priorities, and established the baseline for what good-faith compliance looks like.
The GDPR comparison is instructive — and cautionary. GDPR enforcement was slow in its first two years. The absence of immediate fines created a market impression that GDPR was aspirational rather than operational. That impression proved wrong. GDPR enforcement accelerated significantly after 2021, and some of the largest fines came for companies that had taken the early enforcement slowness as a signal to de-prioritize compliance.
The EU AI Office's public communications since March 1 have been notably direct about this risk. In briefings to IAPP and in official publications, the Office has emphasized that the preparatory period is not a grace period — it is an observation period. Organizations that use the 180 days to build genuine compliance infrastructure will be in a fundamentally different regulatory position by August 29 than those that use the time to litigate whether the Act applies to them.
GPAI models designated as posing systemic risk face the earliest and most concentrated enforcement attention. The EU AI Office has direct jurisdiction over these models and a mandate to ensure that the largest, most capable AI systems operating in EU markets meet the Act's most stringent requirements. Week one suggested that the major frontier model providers understand this. Their documentation submissions were substantial enough to avoid immediate follow-up. The question for the next 180 days is whether the high-risk AI deployer tier — enterprises and mid-market companies operating hiring tools, credit scoring engines, and healthcare AI across EU member states — builds compliance at the pace the Act requires.
What happened in week one of EU AI Act enforcement is less important than what it signals about what comes next. Regulators who signal priorities clearly in week one are building the evidentiary record they will use to justify enforcement actions in months six through twelve. The compliance trajectory a company establishes before August 29 is not just a legal matter. It is a statement about how the company intends to operate in European markets, and regulators are reading it carefully.
The EU AI Act is now live. The first week was a baseline. What organizations do with the next 180 days is the real test.
Frequently asked questions
Were any companies fined in week one of EU AI Act enforcement?
No enforcement actions or fines were issued in the first week after the preparatory obligations activated on March 1, 2026. This is consistent with standard regulatory practice for new frameworks. Enforcement in the preparatory period focuses on establishing non-compliance through investigation before formal action, not on immediate penalty imposition.
Which AI companies had compliance ready on March 1?
The major frontier GPAI model providers — OpenAI, Anthropic, Google DeepMind, and Meta AI — had compliance teams operational well before March 1 and submitted GPAI documentation meeting the Act's baseline requirements. Microsoft, as both an AI deployer and an AI services provider, also arrived with structured compliance infrastructure. The preparedness gap was concentrated among SMB AI vendors, EU market newcomers, and mid-market deployers of high-risk AI systems.
What does the EU AI Office's enforcement priority signal for 2026?
Week-one signals indicated that healthcare AI, financial services AI, and employment AI would receive early scrutiny from national competent authorities. The EU AI Office itself is prioritizing GPAI systemic risk compliance for frontier models. Companies operating in these sectors should assume regulatory engagement before August 29, not after.
How does the August 29 deadline differ from the March 1 obligations?
March 1 activated governance and literacy obligations: AI literacy programs, internal accountability structures, GPAI transparency documentation. August 29 is the full high-risk AI compliance deadline — requiring complete technical documentation stacks, logging and record-keeping systems, human oversight mechanisms, and risk management systems for all high-risk AI deployments. The March 1 period was about building the foundation. August 29 is when the structure must be complete.
What should US companies with EU operations do right now?
Start with an AI system inventory covering every AI tool that affects EU users. Classify each system against the Act's risk tiers — including third-party SaaS platforms with AI-enabled features. Identify your GPAI exposure (if you provide or fine-tune foundation models). Audit vendor contracts for EU AI Act compliance representations. And engage EU legal counsel to assess your compliance status before regulators engage you first.