What is the EU AI Act and why does it matter for SaaS companies?
The EU AI Act is the world's first comprehensive regulation of artificial intelligence. It creates a risk-based framework that classifies AI systems into four tiers — Prohibited, High Risk, Limited Risk, and Minimal Risk — and imposes compliance obligations proportionate to the risk posed. For SaaS companies with AI-powered features serving EU users, the critical date is August 2, 2026, when most of the Act's provisions become fully applicable.
The Act applies based on where your users are located — not where your company is based. A SaaS company based in San Francisco with AI features serving EU users is subject to the EU AI Act in full.
The four-tier risk framework
Prohibited AI systems — banned from February 2025
Certain AI applications are prohibited entirely: subliminal or purposefully manipulative techniques; exploitation of vulnerabilities related to age, disability, or social and economic situation; social scoring systems; real-time remote biometric identification in public spaces (with narrow exceptions); biometric categorisation systems that infer sensitive attributes such as political views or sexual orientation from biometric data; and emotion recognition in workplaces and educational institutions. These prohibitions took effect on February 2, 2025.
High-risk AI systems — the most demanding compliance tier
High-risk AI systems face the most significant obligations, including a mandatory Conformity Assessment before deployment. Under Annex III, AI systems are classified as high-risk when used in these domains:
- Biometric identification and categorisation
- Management and operation of critical infrastructure
- Education — AI that determines access to educational opportunities
- Employment — AI used in hiring, performance evaluation, or promotion decisions
- Essential services — AI determining access to credit, insurance, or public benefits
- Law enforcement — AI used in risk assessment, polygraph testing, or evidence analysis
- Migration and asylum management
- Administration of justice
Limited-risk systems — transparency obligations
Limited-risk AI systems — including chatbots, AI-generated content tools, and emotion recognition systems — must satisfy transparency requirements. Under Article 50, users must be informed when they are interacting with an AI system, unless this is obvious from context. AI-generated images, audio, and video must be marked as artificially generated, in a machine-readable format where technically feasible. These obligations take effect on August 2, 2026.
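The Act leaves the labelling format open. As a minimal sketch of what machine-readable marking could look like in practice, assuming a hypothetical `labelAsSynthetic` helper and illustrative metadata field names of our own:

```typescript
// Sketch: attach a machine-readable "synthetic" label to AI-generated media
// before serving it. Field names are illustrative assumptions; the Act does
// not prescribe a specific labelling format.
interface GeneratedAsset {
  url: string;
  mimeType: string;                  // e.g. "image/png", "audio/mpeg"
  metadata: Record<string, string>;
}

function labelAsSynthetic(asset: GeneratedAsset, generator: string): GeneratedAsset {
  return {
    ...asset,
    metadata: {
      ...asset.metadata,
      "ai-generated": "true",        // machine-readable marker
      generator,                     // provenance: which model produced it
      disclosure: "This content was generated by an AI system.",
    },
  };
}
```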
Minimal-risk systems — no specific obligations
Spam filters, recommendation engines, search ranking algorithms, and similar AI systems fall into the minimal-risk tier and face no AI Act-specific obligations beyond existing applicable law (including GDPR where personal data is processed).
What the August 2, 2026 deadline means for your SaaS product
| AI System Type | Deadline | Primary Obligation |
|---|---|---|
| Prohibited systems | February 2, 2025 | Must have ceased EU deployment |
| High-risk systems (Annex III) | August 2, 2026 | Conformity Assessment; technical documentation; EU database registration |
| High-risk systems (Annex I product safety components) | August 2, 2027 | Conformity Assessment; technical documentation (one additional year) |
| Limited-risk systems | August 2, 2026 | Transparency disclosures to users |
| GPAI models | August 2, 2025 | Technical documentation; usage policies; copyright compliance |
What the Conformity Assessment process involves
For high-risk AI systems, the Conformity Assessment is not a tick-box exercise. It requires:
- Comprehensive technical documentation of the AI system's design, training data, testing methodology, and performance metrics
- A risk management system identifying, analysing, and mitigating known and foreseeable risks
- Data governance procedures covering training, validation, and testing data sets
- Human oversight measures ensuring meaningful human review of outputs
- Accuracy, robustness, and cybersecurity safeguards
- Post-market monitoring planning
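Engineering teams often need to track this evidence alongside the product itself. A minimal sketch of one way to structure it internally, in TypeScript; the grouping and field names are our own illustration, not an official template:

```typescript
// Sketch: internal record of Conformity Assessment evidence for one AI system.
// The fields mirror the documentation themes listed above; they are our own
// illustrative grouping, not an official schema.
interface ConformityEvidence {
  systemDescription: string;                   // design, architecture, intended purpose
  trainingDataSources: string[];               // training/validation/test set provenance
  testingMethodology: string;                  // how accuracy and robustness were measured
  performanceMetrics: Record<string, number>;  // e.g. { "accuracy": 0.94 }
  riskRegister: { risk: string; mitigation: string }[];
  humanOversightMeasures: string;              // how outputs get meaningful human review
  postMarketMonitoringPlan: string;            // how issues are detected after launch
}
```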
The documentation requirements alone are substantial. TECHLAWG advises on the legal compliance aspects of the Conformity Assessment and drafts the required transparency disclosures, terms, and user-facing documentation. See our EU AI Act Compliance service.
The GDPR + EU AI Act stacked compliance problem
Most SaaS companies with AI features face obligations under both GDPR and the EU AI Act simultaneously. High-risk AI systems that process personal data require both a GDPR Data Protection Impact Assessment and an AI Act Conformity Assessment. Training AI models on personal data requires both a GDPR lawful basis and AI Act data governance compliance. Failure under either framework is independently enforceable.
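In release-engineering terms, the two regimes behave like independent gates. A minimal sketch, assuming hypothetical status flags tracked per feature:

```typescript
// Sketch: a launch gate treating GDPR and AI Act sign-offs as independent
// blockers, since failure under either framework is separately enforceable.
// Flag names are hypothetical.
interface ComplianceStatus {
  dpiaCompleted: boolean;               // GDPR Data Protection Impact Assessment
  lawfulBasisDocumented: boolean;       // GDPR basis for training on personal data
  conformityAssessmentPassed: boolean;  // EU AI Act, high-risk systems
}

function canLaunchHighRiskAiFeature(s: ComplianceStatus): boolean {
  // Neither regime substitutes for the other; both must pass.
  return s.dpiaCompleted && s.lawfulBasisDocumented && s.conformityAssessmentPassed;
}
```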
What you need to do now
- Inventory all AI features in your product — including third-party AI APIs you integrate
- Classify each system under the four-tier framework (a minimal inventory sketch follows this list)
- For limited-risk systems: draft and implement the required transparency disclosures before August 2, 2026
- For high-risk systems: begin the Conformity Assessment process immediately — it cannot be completed quickly
- Update your Terms of Service, Privacy Policy, and user notices to reflect AI Act obligations
- Review AI vendor agreements for EU AI Act compliance pass-through provisions
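To make the first two steps concrete, here is a minimal TypeScript sketch of an AI-feature inventory with risk-tier classification. The feature names, vendor, and tier assignments are hypothetical examples; real classification turns on the Annex III domains and requires legal analysis:

```typescript
// Sketch: a minimal AI-feature inventory using the Act's four risk tiers.
// Entries are hypothetical; classification decisions need legal review.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface AiFeature {
  name: string;
  vendor?: string;    // third-party AI APIs you integrate count too
  usedFor: string;
  tier: RiskTier;
  deadline?: string;  // earliest applicable EU AI Act date
}

const inventory: AiFeature[] = [
  { name: "resume-ranker", vendor: "OpenAI", usedFor: "candidate screening",
    tier: "high", deadline: "2026-08-02" },     // employment is an Annex III domain
  { name: "support-bot", usedFor: "customer support chat",
    tier: "limited", deadline: "2026-08-02" },  // Article 50 disclosure applies
  { name: "spam-filter", usedFor: "inbound email triage",
    tier: "minimal" },                          // no AI Act-specific obligations
];
```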
Frequently Asked Questions
What happens if I miss the EU AI Act deadline?
Non-compliance with the EU AI Act is subject to fines of up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI violations; up to €15 million or 3% for other violations; and up to €7.5 million or 1% for providing incorrect information to authorities. Enforcement is by national supervisory authorities designated by each EU member state. The European AI Office provides coordination for cross-border and GPAI matters.
Does the EU AI Act apply if I use an AI API rather than building my own model?
Yes — the EU AI Act applies to "deployers" of AI systems, not just providers. If you integrate a third-party AI API (OpenAI, Anthropic, Google, etc.) into your product and make AI-powered features available to EU users, you are a deployer under the Act and must comply with the obligations applicable to deployers of that risk tier. For high-risk AI systems, deployers have significant obligations of their own, and certain deployers (public bodies and private entities providing essential services such as credit and insurance) must also conduct fundamental rights impact assessments.
What is a General-Purpose AI model under the EU AI Act?
A General-Purpose AI model (GPAI) is an AI model trained on large amounts of data that can competently perform a wide range of distinct tasks. The EU AI Act creates specific obligations for providers who place GPAI models on the EU market (whether via API or as downloadable weights), including technical documentation, copyright compliance policies, and usage policies. Models presumed to pose systemic risk, a presumption triggered when cumulative training compute exceeds 10^25 floating-point operations (FLOPs), face more extensive obligations including model evaluations and incident reporting.
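For rough orientation on where the 10^25 FLOPs threshold sits, here is a back-of-envelope sketch using the common estimate of roughly 6 × parameters × training tokens for dense transformer training. The heuristic and the figures are our own illustration, not a calculation method prescribed by the Act:

```typescript
// Sketch: estimating cumulative training compute against the systemic-risk
// presumption. The 6 * params * tokens heuristic is a common approximation
// for dense transformers, not an official formula.
const SYSTEMIC_RISK_FLOPS = 1e25;

function estimateTrainingFlops(parameters: number, trainingTokens: number): number {
  return 6 * parameters * trainingTokens;
}

// Hypothetical 70B-parameter model trained on 15 trillion tokens:
const flops = estimateTrainingFlops(70e9, 15e12);  // ~6.3e24
console.log(flops >= SYSTEMIC_RISK_FLOPS);         // false: below the presumption
```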
How does the EU AI Act affect my chatbot?
Chatbots and AI virtual assistants are typically classified as Limited Risk AI systems. From August 2, 2026, you must clearly inform users, before or at the start of each interaction, that they are talking to an AI, unless this is obvious from the context. Failure to make this disclosure is a violation with fines of up to €15 million or 3% of global turnover.
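A minimal sketch of what this could look like in a chat product, assuming a hypothetical session API; the Act requires only that the disclosure is clear and made at the start, not any particular implementation:

```typescript
// Sketch: surface the AI disclosure before any user turn in a chat session.
// The session shape and wording are hypothetical.
interface ChatMessage {
  role: "system-notice" | "user" | "assistant";
  text: string;
}

function startAiChatSession(): ChatMessage[] {
  return [{
    role: "system-notice",
    text: "You are chatting with an AI assistant, not a human agent.",
  }];
}

const session = startAiChatSession();  // disclosure precedes the first user message
```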
Do I need to register my AI system in the EU database?
High-risk AI systems under Annex III of the EU AI Act must be registered in a publicly accessible EU database before deployment. Providers must submit: their name and contact details; a description of the AI system; the intended purpose and countries of deployment; the conformity assessment applied; and the EU Declaration of Conformity. The registration requirement applies to providers, not deployers. Registration must be completed before market placement.