August 2, 2026 is the date most of the EU AI Act's obligations become enforceable. For SaaS companies with AI-powered features, this is not a future concern: the conformity assessment process for high-risk AI systems is substantial and cannot be completed in weeks. If you have not started, the deadline is already close.
The first step in EU AI Act compliance is accurately classifying your AI system under the Act's risk framework. Misclassification — in either direction — creates either unnecessary compliance cost or serious enforcement exposure.
The EU AI Act creates specific obligations for general-purpose AI models — foundation models made available to third parties. If your platform exposes a general-purpose AI model via an API, you may have GPAI obligations including technical documentation, copyright compliance policies, and (for systemic risk models) model evaluations and incident reporting.
The EU AI Act applies to providers placing AI systems on the EU market and to deployers of AI systems in the EU — regardless of where the provider is based. If your SaaS product includes recommendation engines, automated decision-making, chatbots, scoring systems, or generative AI features serving EU users, the Act applies. Your specific obligations depend on how your system is classified under the four-tier risk framework.
Prohibited (banned entirely): subliminal manipulation, social scoring, real-time biometric identification in public spaces.
High Risk: AI in employment, education, credit scoring, essential services, law enforcement — requires a Conformity Assessment, technical documentation, and EU database registration.
Limited Risk: chatbots and emotion recognition — requires transparency disclosures telling users they are interacting with AI.
Minimal Risk: spam filters, recommendation engines — no specific AI Act obligations beyond existing law.
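For an engineering team taking a first pass at classification, the four tiers above can be sketched as a simple triage lookup. This is an illustrative sketch only, not legal advice: the tier names follow the Act, but the use-case strings and the `classify` helper are hypothetical labels chosen for this example.

```python
# Hypothetical first-pass triage of a SaaS AI feature against the
# AI Act's four risk tiers. Mapping is illustrative, not legal advice.
RISK_TIERS = {
    "prohibited": {"subliminal manipulation", "social scoring",
                   "real-time public biometric identification"},
    "high": {"employment screening", "education scoring", "credit scoring",
             "essential services access", "law enforcement"},
    "limited": {"chatbot", "emotion recognition"},
    "minimal": {"spam filter", "recommendation engine"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, else 'unclassified'."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "unclassified"
```

Anything that comes back "unclassified" in a real triage is exactly the case that needs legal review, since misclassification in either direction is costly.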
A Conformity Assessment is a mandatory evaluation demonstrating that a high-risk AI system complies with the EU AI Act's requirements — including technical documentation, risk management system, data governance, human oversight measures, accuracy and robustness testing, and post-market monitoring. For most high-risk AI systems, providers conduct self-assessment and issue an EU Declaration of Conformity.
The EU AI Act and GDPR create overlapping obligations for AI systems that process personal data — which is most AI systems. High-risk AI systems require both a GDPR DPIA and an AI Act Conformity Assessment. Training AI on personal data requires a GDPR lawful basis and AI Act data governance compliance. A single data incident can trigger stacked enforcement under both frameworks.
Under Article 50, providers of AI systems designed to interact with humans — including chatbots and virtual assistants — must ensure users are informed they are interacting with an AI system, unless this is obvious from context. This disclosure must be made at the beginning of the first interaction. Failure to comply constitutes a violation of the Limited Risk tier obligations, with fines of up to €15 million or 3% of global annual turnover, whichever is higher.
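In a SaaS chatbot, the "beginning of the first interaction" requirement is easy to enforce mechanically: guarantee the disclosure precedes the first bot message in every new session. The sketch below is one possible pattern, assuming a per-session flag; the class, message wording, and transcript format are assumptions for illustration, not a prescribed implementation.

```python
# Hypothetical sketch: guarantee an AI-interaction disclosure appears
# before the first bot message of each session (Article 50 pattern).
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

class ChatSession:
    def __init__(self) -> None:
        self.disclosed = False          # has this session shown the notice?
        self.transcript: list[str] = []

    def send(self, bot_reply: str) -> None:
        # Emit the disclosure once, ahead of the very first bot message.
        if not self.disclosed:
            self.transcript.append(AI_DISCLOSURE)
            self.disclosed = True
        self.transcript.append(bot_reply)
```

Keeping the flag per session (rather than per user) is the conservative reading: each new conversation begins with the notice, and it is never repeated mid-conversation.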
Book a free consultation. We assess your situation, confirm scope, and provide a fixed-fee quote — with no commitment required.