Your company sells products in the EU. You use AI to screen job candidates in Europe. Your AI system's outputs touch EU markets. You've got roughly 110 days to comply with the most prescriptive AI regulation ever enacted. And the compliance gap isn't a paperwork problem. It's an engineering problem.
---
On August 2, 2026, the high-risk AI system obligations under the EU AI Act (Regulation (EU) 2024/1689, the European Union's comprehensive framework for regulating artificial intelligence based on risk classification) become enforceable. That's the date when providers and deployers of AI systems classified as "high-risk" under Annex III must have fully operational risk management systems, technical documentation, conformity assessments, human oversight controls, and automatic logging infrastructure in place.
Not planned. Not budgeted. In place.
The prohibited practices ban (Article 5) and AI literacy requirements (Article 4) already took effect on February 2, 2025. General-purpose AI model obligations kicked in on August 2, 2025. Those were warmup rounds. August 2, 2026 is the main event, and it carries fines of up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher.
Here's what I'd tell any CEO who thinks this is like GDPR: it isn't. You can't comply by updating a privacy policy and hiring a DPO. The EU AI Act requires changes to how AI systems are built, documented, monitored, and governed, and that demands engineering work, not policy updates. If you haven't started, 110 days isn't enough time to do this well. But it is enough time to do the critical things that keep you out of the enforcement crosshairs.
Who's Actually in Scope? More US Companies Than You Think.
The EU AI Act reaches further than most US companies realize. Article 2(1) lays out three jurisdictional triggers, and the third one catches people off guard.
The obvious triggers: if you place an AI system on the EU market (sell it to EU customers), you're in scope. If you deploy an AI system within the EU (your EU subsidiary uses it), you're in scope.
Now the less obvious one. Under Article 2(1)(c), providers and deployers located outside the EU are subject to the AI Act whenever "the output produced by the AI system is used in the Union." That language is intentionally broad. Your US-based manufacturing company uses AI-driven quality control, and those products ship to EU buyers? The AI output is used in the Union. Your energy trading platform uses AI optimization that affects EU energy markets? In scope. Your HR team in New York screens candidates for a Berlin office using AI? In scope.
Think of it as GDPR's extraterritorial reach, applied to AI systems instead of personal data. And just like GDPR, many US companies won't realize they're covered until enforcement begins.
Which AI Systems Qualify as "High-Risk"?
Annex III of the EU AI Act lists eight categories of high-risk AI use cases. Three hit US companies in regulated industries hardest.
Critical infrastructure (Category 2). AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity. If you're in clean energy and using AI for grid management, predictive maintenance of turbines or solar installations, automated grid balancing, or energy demand forecasting that feeds into grid control systems, your AI likely qualifies. The AI doesn't have to be dangerous on its own. If it serves as a safety component within critical infrastructure, it's high-risk.
Employment and workforce management (Category 4). This one has the widest blast radius. AI systems used for recruitment, screening, promotion decisions, task allocation, performance monitoring, or termination decisions. Virtually every large US company with EU employees or EU job applicants that uses AI anywhere in the hiring or workforce management pipeline is caught here. Resume screeners, automated interview scoring, algorithmic performance ratings, AI-driven scheduling tools that affect work conditions: all of it.
Essential services (Category 5). AI systems used to evaluate creditworthiness, establish credit scores, or assess risk for life and health insurance pricing. If your company provides financial services, insurance, or B2B credit assessments that touch EU counterparties, pay attention. Credit scoring systems under Category 5(b) also trigger a mandatory fundamental rights impact assessment (Article 27) before deployment, even for private-sector deployers.
There's also a separate track. Article 6(1) covers AI systems embedded in products already regulated by EU harmonized legislation listed in Annex I, including the Machinery Regulation (2023/1230). If you're building AI into industrial equipment sold in the EU, that's a different compliance path with a later deadline (August 2, 2027), but the classification analysis should be happening now.
What Does Compliance Actually Require?
The obligations break down differently depending on whether you're a provider (the organization that developed or placed the AI system on the market) or a deployer (the organization that uses an AI system under its own authority). Most US companies are deployers, though some are both.
For Providers (the heavier lift)
Risk management system (Article 9). This isn't a one-time risk assessment. It's a continuous, iterative process running throughout the AI system's lifecycle. You need to identify and analyze foreseeable risks, adopt risk management measures, and test whether those measures actually work. In my experience, companies that treat this as a "do it once and file it" exercise get into trouble.
Technical documentation (Article 11). Detailed documentation drawn up before the system goes to market. Kept current. System architecture, data governance practices, training methodologies, performance metrics, known limitations. If you can't produce this on request, you can't demonstrate compliance. Full stop.
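For teams wondering what that documentation looks like as an artifact, here's an illustrative skeleton, expressed in Python so it can live in version control next to the system it describes. The headings loosely track Annex IV, but the structure and every value are my assumptions, not a template from the regulation.

```python
# Illustrative skeleton for Article 11 technical documentation.
# Headings loosely track Annex IV; this structure is an assumption,
# not a template taken from the regulation.
TECH_DOC = {
    "system": {
        "name": "resume-screener-v3",  # hypothetical system
        "intended_purpose": "Rank inbound applications for open requisitions",
        "version": "3.2.0",
    },
    "architecture": "Gradient-boosted ranking model behind a REST scoring API",
    "data_governance": {
        "training_data_sources": ["historical applications, 2019-2024"],
        "bias_mitigation": "Demographic parity checks on screening outcomes",
    },
    "performance": {
        "metrics": {"auc": 0.87},
        "known_limitations": ["Degrades on CVs shorter than one page"],
    },
}
```

The point isn't the format. It's that the document exists before market placement, stays current with each release, and can be produced on request.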
Automatic logging (Article 12). The system must automatically record events throughout its lifecycle, traceable to specific operational contexts. You can't bolt this on after the fact. It needs to be designed into the system from the start.
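To make that concrete, here's a minimal sketch of Article 12-style logging in Python. The regulation requires automatic recording of events traceable to the operational context, but it doesn't prescribe a schema, so the `log_event` helper and its field names are assumptions for illustration.

```python
import json
import uuid
from datetime import datetime, timezone

def log_event(log_path, system_id, event_type, context, payload):
    """Append one traceable event record to an append-only log.

    Illustrative only: Article 12 requires automatic, traceable event
    recording but does not prescribe these field names.
    """
    record = {
        "event_id": str(uuid.uuid4()),          # unique handle for later reference
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,                 # which AI system produced the event
        "event_type": event_type,               # e.g. "inference", "override", "error"
        "context": context,                     # operational context: site, session, operator
        "payload": payload,                     # inputs and outputs needed for traceability
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Record the decision the moment it is produced, not reconstructed
# later from application logs.
log_event(
    "ai_events.jsonl",
    system_id="resume-screener-v3",             # hypothetical system
    event_type="inference",
    context={"requisition": "berlin-eng-042", "operator": "hr-user-17"},
    payload={"candidate_ref": "c-9912", "score": 0.81, "outcome": "advance"},
)
```

The design choice that matters: logging happens inside the decision path, so no code path can produce an output without producing a record.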
Human oversight (Article 14). The system must be designed so that humans can actually oversee it during use. That means human-machine interface tools that let the deployer understand outputs, interpret them correctly, and decide to override or reverse decisions. "A human reviews the output" doesn't cut it if the interface doesn't enable meaningful oversight.
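What might that look like in software? Here's an illustrative data model, not a prescribed interface; the `ReviewableDecision` class and its fields are assumptions meant to show the principle that the reviewer sees why the system decided, and that an override is a recorded, first-class action.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewableDecision:
    """One AI output routed through human review (illustrative only)."""
    decision_id: str
    model_output: str              # what the system recommended
    confidence: float              # a signal the reviewer must be able to interpret
    top_features: list[str]        # why the system leaned this way
    final_outcome: Optional[str] = None
    overridden: bool = False
    override_reason: Optional[str] = None

    def approve(self) -> None:
        self.final_outcome = self.model_output

    def override(self, outcome: str, reason: str) -> None:
        # A reversal is recorded with its rationale, not noted
        # informally after the fact.
        self.final_outcome = outcome
        self.overridden = True
        self.override_reason = reason

# The reviewer can see why the system leaned one way before deciding,
# and the override leaves an auditable trace.
d = ReviewableDecision("c-9912", model_output="reject",
                       confidence=0.55, top_features=["employment gap"])
d.override("advance", reason="gap explained by documented parental leave")
```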
Conformity assessment (Article 43). Before placing a high-risk system on the EU market, providers must complete a conformity assessment, a structured evaluation demonstrating that the AI system meets the regulation's essential requirements. For most Annex III systems, this is a self-assessment under Annex VI. But it's not a checkbox exercise. You need to demonstrate that your risk management, documentation, logging, and oversight systems actually meet the regulation's requirements.
EU authorized representative (Article 22). Non-EU providers must appoint an authorized representative established in the EU. The role carries real compliance responsibilities; it's more than a mailing address.
Registration (Article 49). Register the system in the EU's public database before market placement.
For Deployers (lighter, but real)
Put technical and organizational measures in place (Article 26(1)) to ensure you're using the system according to the provider's instructions.
Assign human oversight to competent, trained, authorized individuals (Article 26(2)). Not just anyone. People who understand what the system does and can meaningfully step in.
Monitor the system's operation (Article 26(5)) and report to providers or authorities when you spot risks.
Retain automatically generated logs for at least six months (Article 26(6)), or longer if sector-specific law requires it; a sketch of a retention check follows this list of duties.
Inform affected individuals (Article 26(11)) that they're subject to a high-risk AI system decision, with meaningful explanations.
And for certain deployers, a fundamental rights impact assessment (Article 27) before deployment. This applies to public bodies, private entities providing public services (for any Annex III high-risk system), and all deployers of credit scoring (5(b)) or insurance pricing (5(c)) AI systems.
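Since log retention is one of the few deployer duties with a hard number attached, here's the minimal retention sketch promised above. It assumes logs are sharded into one dated file per day; the directory layout, the file naming, and the 183-day constant are my assumptions, not anything the regulation prescribes.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

MIN_RETENTION_DAYS = 183  # at least six months per Article 26(6); sector law may require more

def purge_expired_logs(log_dir: str) -> None:
    """Delete log files only once they are past the retention floor.

    Assumes one file per day named like '2026-08-02.jsonl'; adapt the
    parsing to however your logging pipeline actually shards files.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=MIN_RETENTION_DAYS)
    for path in Path(log_dir).glob("*.jsonl"):
        try:
            file_date = datetime.strptime(path.stem, "%Y-%m-%d").replace(tzinfo=timezone.utc)
        except ValueError:
            continue  # skip files that don't match the expected naming
        if file_date < cutoff:
            path.unlink()
```

Note the direction of the rule: the engineering risk here is usually deleting too early, not keeping too long.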
Why Aren't Harmonized Standards Ready Yet?
Here's a complication that doesn't get enough attention.
The harmonized technical standards that companies would normally rely on for compliance aren't ready. CEN and CENELEC (the European Committee for Standardization and the European Committee for Electrotechnical Standardization, the bodies responsible for developing EU technical standards) have been developing these standards through Joint Technical Committee 21 (JTC 21). Following a harmonized standard would give companies a "presumption of conformity," meaning if you follow the standard, you're presumed to meet the regulation's requirements. That's how compliance typically works for EU product safety law.
But the standards are behind schedule. The original deadline was April 2025. It slipped to August 2025. In late 2025, CEN/CENELEC adopted exceptional acceleration measures. Key standards like prEN 18286 (quality management systems for AI) were still in public enquiry through January 2026, with publication targeted for Q4 2026.
Companies face the August 2, 2026 enforcement deadline without finalized harmonized standards to follow. That means compliance will require interpreting the regulation text directly, supplemented by EU AI Office guidance and existing frameworks like ISO/IEC 42001 (the international standard for AI management systems). It's doable. But it requires more legal and technical judgment than following a published standard would.
The EU AI Pact: A Signal, Not a Shield
The European Commission launched the AI Pact in September 2024 as a voluntary framework for companies to start putting AI Act provisions in place ahead of the deadlines. Over 230 companies signed on. But participation skews heavily toward large EU-headquartered companies. US mid-market industrial firms, the companies most likely to be caught by the critical infrastructure and employment categories, are largely absent.
The Pact's three core pledges (AI governance strategy, high-risk system mapping, and AI literacy promotion) are useful starting points. But voluntary pledges aren't governance programs. And here's the competitive angle most people miss: if your EU competitors and customers have already signed public commitments to AI Act compliance, showing up without a governance program creates a market credibility gap that goes beyond regulatory risk. In procurement conversations, AI governance readiness is becoming a qualifying criterion, not a differentiator.
What To Do in the Next 110 Days
You can't build a complete AI governance program in 110 days. But you can build the foundation and close the highest-risk gaps.
1. Inventory your AI systems this month. You can't classify what you haven't catalogued. Map every AI system your company builds, sells, or deploys, including third-party AI tools used by your teams. For each one, document what it does, what data it uses, who it affects, and whether its output touches the EU. A sketch of what one inventory record might capture follows this list.
2. Run the Annex III classification analysis. For each inventoried system, determine whether it falls into a high-risk category. Focus on Category 2 (critical infrastructure), Category 4 (employment), and Category 5 (essential services) first. Document your classification reasoning. Regulators will ask for it.
3. Appoint an EU authorized representative if you're a provider. Article 22 requires this for non-EU providers. Start now; finding the right representative and executing the written mandate takes longer than you'd expect.
4. Prioritize logging and documentation for your highest-risk systems. Articles 11 and 12 require technical documentation and automatic logging that can't be created after the fact. If your AI systems don't currently log decisions in a traceable way, this is the engineering work that needs to start immediately. Your COO or CTO should own this workstream.
5. Design human oversight that actually works. Article 14 requires more than a human in the loop. It requires interfaces that enable humans to understand, interpret, and override AI outputs. For your highest-risk systems, evaluate whether your current oversight processes meet this bar.
6. Brief your board. If your company has EU market exposure and uses AI in any Annex III category, the board needs to know about the August 2 deadline, the fine exposure (up to 3% of global turnover), and the budget implications. This isn't a technology decision. It's a market access decision. Your CEO should be driving this conversation.
7. Start the conformity assessment process for provider-side systems. Even the self-assessment path under Annex VI requires documented evidence that your risk management, technical documentation, and oversight systems meet regulatory requirements. Start gathering that evidence now.
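To make steps 1 and 2 concrete, here's a sketch of what one inventory record might look like, with the classification reasoning captured next to the system it describes. Every field name and value is an assumption for illustration; the regulation doesn't prescribe an inventory format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    """One row in an AI system inventory (illustrative schema)."""
    name: str
    role: str                      # "provider", "deployer", or "both"
    purpose: str                   # what the system does
    data_used: str                 # categories of input data
    affected_persons: str          # who the outputs affect
    eu_output: bool                # does the output touch the EU? (Article 2(1)(c))
    annex_iii_category: str        # e.g. "4 - employment", or "none"
    classification_rationale: str  # document the reasoning; regulators will ask

inventory = [
    AISystemRecord(
        name="resume-screener-v3",  # hypothetical system
        role="deployer",
        purpose="Ranks inbound applications for open requisitions",
        data_used="CVs, structured application forms",
        affected_persons="Job applicants, including candidates for EU roles",
        eu_output=True,
        annex_iii_category="4 - employment",
        classification_rationale="Screens candidates for a Berlin office; "
                                 "recruitment screening falls squarely in Category 4.",
    ),
]

# Serialize so the inventory is reviewable by legal, not just engineering.
print(json.dumps([asdict(r) for r in inventory], indent=2))
```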
Frequently Asked Questions
Does the EU AI Act apply to my US-based company?
Yes, if your AI system's output is used in the EU. Under Article 2(1)(c), the EU AI Act applies to providers and deployers outside the EU whenever the output of their AI system is used in the Union. That covers US companies whose AI-driven products ship to EU buyers, whose AI tools screen candidates for EU roles, or whose AI optimization touches EU markets. You don't need an EU subsidiary to be in scope.
Which AI systems count as high-risk under the EU AI Act?
Annex III of the EU AI Act lists eight categories of high-risk use cases. The three most relevant for US companies in manufacturing, energy, and enterprise tech are Category 2 (AI used as a safety component in critical infrastructure like power grids), Category 4 (AI used in employment decisions, from recruitment screening to performance monitoring), and Category 5 (AI used for credit scoring or insurance risk assessment). If your system falls into any of these categories, the full set of high-risk obligations applies by August 2, 2026.
Do I need to do anything right now, or can I wait for the harmonized standards?
You need to start now. The harmonized technical standards from CEN/CENELEC are targeted for Q4 2026, well after the August 2, 2026 enforcement deadline. Waiting for standards isn't a viable strategy. Companies should begin with an AI system inventory, run the Annex III classification analysis, and prioritize engineering work on logging and documentation for their highest-risk systems. Fines of up to EUR 15 million or 3% of worldwide annual turnover apply from day one.
What's the difference between a "provider" and a "deployer" under the EU AI Act?
A provider is the organization that develops an AI system or has one developed and places it on the market or puts it into service under its own name. A deployer is the organization that uses an AI system under its own authority. Providers carry heavier obligations (risk management systems, technical documentation, conformity assessments, logging infrastructure). Deployers have lighter but real duties: using the system per the provider's instructions, assigning trained human oversight, monitoring operations, and retaining logs for at least six months under Article 26(6). Most US companies are deployers.
How big are the fines for non-compliance?
For violations of the high-risk AI system obligations (Articles 6-49), fines can reach EUR 15 million or 3% of worldwide annual turnover, whichever is higher. For violations of the prohibited practices under Article 5, the ceiling is even steeper: EUR 35 million or 7% of global turnover. Each EU Member State will designate its own market surveillance authorities, so enforcement intensity may vary, but the penalty framework is on the books from August 2, 2026.
What We're Watching
EU AI Office guidance on Annex III classification. Interpretive guidance on which systems qualify as high-risk under each category is expected in mid-2026. This will matter enormously for borderline cases, but waiting for it is not a strategy.
CEN/CENELEC harmonized standards. The first harmonized standards under JTC 21 are targeted for Q4 2026. They'll provide clearer compliance pathways once published, but they won't be available before the August 2 deadline.
Member State enforcement posture. Each EU Member State must designate market surveillance authorities. How aggressively they enforce in the first 12 months will vary, just as it did with GDPR. But the fines are on the books from day one.
The Annex I / Article 6(1) deadline (August 2, 2027). If your AI is embedded in regulated products (medical devices, machinery, vehicles), you have an additional year. But the classification analysis and compliance planning should start now, not in 12 months.
The companies that handled GDPR well started 18 months before the deadline. Most companies dealing with the EU AI Act have given themselves less than 12. The difference? AI Act compliance requires engineering changes, not just policy changes. Start with the inventory, focus on the highest-risk systems, and build from there.
---
This article is for informational purposes only and does not constitute legal advice. Every company's situation is different, and you should consult with qualified legal counsel before making compliance decisions based on the developments discussed here.
If your company uses AI systems with EU market exposure and you're not sure where you stand on the August 2 deadline, that's the conversation to have with your outside general counsel now, not in July.