EU AI Act: What US Businesses Must Do Now
AI Strategy | April 27, 2026 · 10 min read


The EU AI Act's August 2, 2026 deadline applies to any US business with EU customers. Fines reach 7% of global revenue. Here is the SMB survival guide.

Gabe Keding, Parker Newell, Luke Keding

The OneWave Team

AI Consulting

The Fine Is 7% of Your Revenue. Most SMBs Have Never Heard of This Law.

A client called us in March asking about their AI tools. They run a 22-person HR software firm based in Tampa. Part of their customer base is in Germany and France. They use AI to screen job applications on behalf of those clients — an automated scoring layer built on a third-party API. When we asked if they had assessed their exposure under the EU AI Act, they had never heard of it.

That conversation is not unusual. 58% of small businesses now use generative AI, up from 23% in 2023. Many of those businesses have EU customers, process data from EU residents, or have integrated AI into workflows that land squarely in the regulated categories. The compliance clock has been running since 2024. The central deadline is August 2, 2026 — 97 days from today.

The penalties are not theoretical. Non-compliance fines reach €35 million or 7% of global annual revenue, whichever is higher. For a $5 million SMB, that is a potential $350,000 fine. For a $20 million firm, it is $1.4 million. This is not GDPR-style enforcement that mostly catches large companies years after the fact. Notified bodies are operational. The EU is not bluffing.

The EU AI Act does not stop at the EU border. If your AI touches an EU resident, it applies to you — regardless of where your business is incorporated, where your servers live, or how small your EU revenue share is.

Does This Actually Apply to Your Business?

The threshold question most US businesses get wrong: they assume the AI Act only applies to European companies. It does not. The AI Act applies to any business whose AI system's output is used in the EU — regardless of where the business is headquartered. The trigger is not where you are. It is where your AI's effects land.

If any of the following apply, you are likely in scope:

- You have EU-based customers who interact with your AI products or services.
- You use AI to make decisions about EU residents — job applications, credit assessments, insurance pricing, access to services.
- You provide AI-powered tools to EU businesses.
- You process personal data from EU residents using AI systems, even in a purely analytical role.

The scope is broad by design. The EU's intent is to regulate the impact on EU residents, not to restrict itself to EU-incorporated entities. A US staffing firm that uses AI to screen candidates for European clients is in scope. A US SaaS company that uses AI to personalize recommendations for EU users is in scope. A US insurance broker that uses AI to generate quotes covering European policyholders is in scope. If you have any doubt, assume you are in scope and work backward from there.


The Four Risk Tiers — and Where SMBs Actually Land

The EU AI Act classifies AI systems into four risk categories. Your compliance obligations depend entirely on which tier your systems fall into. Understanding the tiers is the first step to understanding what you actually need to do.

Unacceptable Risk — Banned

These systems are prohibited outright and have been since February 2, 2025. They include AI that manipulates people through subliminal techniques, exploits vulnerabilities of specific groups, and certain real-time biometric identification systems deployed in public spaces. Most SMBs are not near this category. But if you are using AI for social scoring, behavioral manipulation, or predictive policing, stop immediately.

High Risk — Full Compliance Required

This is where most of the compliance work lives, and it is where many SMBs are accidentally caught. Annex III of the AI Act defines eight high-risk categories: biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and administration of justice. Before August 2, 2026, providers of high-risk systems must complete conformity assessments, prepare technical documentation, implement risk management systems, and register in the EU AI Act database.

Limited Risk — Transparency Obligations

AI systems that interact with people — chatbots, AI content generators, synthetic media — must disclose that they are AI. This is the category most SMBs using customer-facing AI fall into. The obligation is disclosure, not a full compliance assessment. If your AI talks to customers, tell them it is AI. Update your website copy, your chat interface, and your terms of service.

Minimal Risk — No Specific Obligations

The vast majority of AI applications — spam filters, recommendation engines, inventory optimization, internal productivity tools — fall here. No specific compliance requirements apply, though sound data governance practices are always advisable. If your AI is used only for internal operations and has no EU-facing customer interaction, this is likely your category.



The High-Risk Categories That Catch SMBs Off Guard

Most small business owners assume high-risk means military drones or airport facial recognition. The actual list is much broader, and several of the most common SMB AI use cases land squarely in it.

Employment and Hiring AI

Any AI system used to recruit, screen, rank, or evaluate candidates, assess performance, or make decisions about work allocation is classified as high-risk. This includes resume screening tools, interview analysis software, performance scoring algorithms, and AI-assisted promotion recommendations. We have written about the productivity case for AI for HR and recruiting — the efficiency gains are real. But if those tools touch EU-based candidates or employees, they require full high-risk compliance before deployment. The Tampa firm we mentioned above had exactly this exposure and did not know it.

Credit Scoring and Insurance Pricing

AI used to evaluate creditworthiness, set insurance prices, or determine eligibility for financial services is high-risk. Finance firms using AI to underwrite loans or generate insurance quotes for EU residents need conformity assessments and documented risk management procedures. The fraud detection carve-out is narrow — it applies only to systems whose primary function is detecting fraud, not scoring or pricing.

Healthcare AI

AI used to make or influence medical decisions — including scheduling, triage, prior authorization, and diagnostic support — is high-risk. Small practices using AI for healthcare admin with European patient data need to treat compliance as a pre-deployment requirement, not something to address after the fact.

Education and Vocational Training Tools

AI used to evaluate students, determine access to educational institutions, or assess performance in vocational training programs is high-risk. EdTech companies with EU users — even free-tier EU users — need to audit their AI features carefully before the August deadline.


What You Need to Do Before August 2

The compliance path depends entirely on your risk tier. Here is the actionable framework we walk clients through.

Step 1: Inventory Every AI System You Use

List every AI tool your business uses that affects people in any way. This includes third-party tools you license, AI features embedded in SaaS products, and custom-built systems. The shadow AI problem means this list is almost certainly larger than you expect. Employees using personal accounts for client work count. AI features inside HR platforms, CRMs, and accounting tools count. Any AI API your engineering team has integrated counts.
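A minimal sketch of what such an inventory might look like if you track it in code rather than a spreadsheet. The record fields and the two example entries are our own illustration, not anything the Act prescribes:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    vendor: str          # "internal" for custom-built systems
    purpose: str
    embedded_in: str     # the host product, for AI features inside SaaS tools
    touches_people: bool # does its output affect individuals in any way?

# Hypothetical inventory entries for a small HR software firm
inventory = [
    AISystem("resume screener", "internal", "rank job applicants", "ATS", True),
    AISystem("spam filter", "Google", "filter inbound email", "Workspace", False),
]

# Systems whose output affects people move on to the EU-exposure check in Step 2
needs_review = [s.name for s in inventory if s.touches_people]
```

The point of the structure is the `touches_people` flag: anything marked true proceeds to the scoping and classification steps below, and anything marked false gets its reasoning documented and set aside.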

Step 2: Determine Your EU Exposure

For each system on the list, ask: does this system's output affect EU residents? If yes, it is in scope. If no, document your reasoning and move on. Being able to demonstrate that you assessed your exposure is itself a compliance signal — regulators look for evidence of due diligence, not perfection.

Step 3: Classify Each In-Scope System

Use the four risk tiers to classify each in-scope system. The official EU AI Act compliance checker is a legitimate free tool — use it. For employment, healthcare, credit, or education AI, assume high-risk until you have documented evidence otherwise.
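As an illustration of the tier logic described above, a rough first-pass filter can be sketched in a few lines. The function and domain names are our own shorthand, and a real classification turns on legal analysis of Annex III, not a lookup table; treat the output as a starting presumption:

```python
# Shorthand labels for the eight Annex III high-risk categories named above
HIGH_RISK_DOMAINS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}

def classify(domain: str, eu_exposure: bool, customer_facing: bool) -> str:
    """First-pass risk tier presumption; counsel makes the final call."""
    if not eu_exposure:
        return "out of scope"
    if domain in HIGH_RISK_DOMAINS:
        return "high"       # full conformity assessment and registration
    if customer_facing:
        return "limited"    # transparency disclosure required
    return "minimal"        # no specific obligations

# A resume screener touching EU candidates presumptively classifies as high-risk
tier = classify("employment", eu_exposure=True, customer_facing=False)
```

Note that the logic errs toward the stricter tier, matching the advice above: for employment, healthcare, credit, or education AI, assume high-risk until you have documented evidence otherwise.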

Step 4: For High-Risk Systems, Build Your Documentation

High-risk compliance requires four core outputs: a conformity assessment, technical documentation, a risk management system, and registration in the EU AI Act database. If you are using a third-party AI system in a high-risk category, your vendor should be supplying this documentation. If they cannot, that is a serious red flag. Our guide on how to evaluate an AI vendor covers the specific questions to ask about regulatory compliance before signing any contract.

Step 5: Implement Transparency Disclosures for All Customer-Facing AI

Even if your systems are minimal-risk, disclose to users when they are interacting with AI. This is a legal requirement for limited-risk systems and a baseline expectation for any customer-facing AI. Update your terms of service, privacy policy, and the copy in any AI-powered interface.


The Deadline Is a Forcing Function. Use It.

The businesses that treat the EU AI Act deadline as a pure compliance exercise will spend time and money without gaining much. The businesses that treat it as a full audit of their AI deployment will come out with cleaner workflows, better vendor agreements, and stronger data governance than they had going in.

Every client we walk through this process discovers at least two things they did not know: an AI feature inside a tool they already use that qualifies as in-scope, and a vendor data agreement that is either missing or inadequate. The compliance process is not purely regulatory — it is operational hygiene. Our deeper breakdown of AI data privacy for small businesses covers the broader data governance layer that the EU AI Act builds on. The two overlap significantly.

August 2, 2026 is the date the EU AI Act's core obligations become fully enforceable. That is not a projection. It is the date written into the regulation. If you have EU customers and AI systems that touch employment, healthcare, finance, or education decisions, you have less than 100 days. Start the inventory this week.

If you want a structured assessment of your AI portfolio's regulatory exposure, that is a service we run for clients as a focused two-week engagement. We start with the inventory, run each system through the risk tier framework, identify gaps in vendor agreements, and produce a remediation roadmap before the deadline. Reach out through our contact page to start the conversation.

Compliance is not the reason to audit your AI systems. The reason is that most businesses have no clear picture of what AI is doing on their behalf. The EU AI Act just gave you a deadline to find out.
EU AI Act, AI compliance 2026, EU AI regulation, AI governance for business, AI compliance US business, AI risk management, EU AI Act deadline, OneWave AI

Need help implementing AI?

OneWave AI helps small and mid-sized businesses adopt AI with practical, results-driven consulting. Talk to our team.

Get in Touch