EU AI Act Compliance Services

August 2, 2026 is the date on which most of the EU AI Act's obligations — including those for high-risk AI systems — become applicable. For SaaS companies with AI-powered features, this is not a future concern: the conformity assessment process for high-risk AI systems is substantial and cannot be completed in weeks. If you have not started, the deadline is already close.

Risk classification — where does your AI system sit?

The first step in EU AI Act compliance is accurately classifying your AI system under the Act's risk framework. Misclassification — in either direction — creates either unnecessary compliance cost or serious enforcement exposure.

  • Prohibited systems — banned from the EU market entirely from February 2025
  • High-risk systems — the most demanding compliance tier. Includes AI used in hiring, education admissions, credit scoring, essential service access, and law enforcement. Requires a Conformity Assessment before market placement.
  • Limited-risk systems — chatbots, AI-generated content, emotion recognition. Requires specific transparency disclosures to users.
  • Minimal-risk systems — spam filters, recommendation engines. No specific AI Act obligations.
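In practice, classification starts with an inventory of AI features mapped against the tiers above. The sketch below is illustrative only — the use-case labels and the mapping are simplified assumptions for a first-pass triage, not legal classifications, and any feature not clearly covered should go to legal review.

```python
# Illustrative first-pass triage of an AI feature inventory against the
# EU AI Act's four risk tiers. The use-case keys and tier assignments
# below are simplified assumptions for demonstration, not legal advice.

RISK_TIERS = {
    "social_scoring": "prohibited",
    "subliminal_manipulation": "prohibited",
    "hiring_screening": "high",
    "credit_scoring": "high",
    "education_admissions": "high",
    "customer_chatbot": "limited",
    "ai_generated_content": "limited",
    "spam_filter": "minimal",
    "product_recommendations": "minimal",
}

def triage(use_case: str) -> str:
    """Return a provisional risk tier, defaulting to manual legal review."""
    return RISK_TIERS.get(use_case, "needs_legal_review")

if __name__ == "__main__":
    inventory = ["customer_chatbot", "hiring_screening", "spam_filter", "fraud_detection"]
    for feature in inventory:
        print(f"{feature}: {triage(feature)}")
```

Note the default: anything not explicitly mapped falls through to manual review rather than being assumed minimal-risk, since misclassification in either direction is costly.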

General-purpose AI models (GPAI)

The EU AI Act creates specific obligations for general-purpose AI models — foundation models made available to third parties. If your platform exposes a general-purpose AI model via an API, you may have GPAI obligations including technical documentation, copyright compliance policies, and (for systemic risk models) model evaluations and incident reporting.

What TECHLAWG delivers for EU AI Act compliance

  • AI system inventory and risk classification assessment
  • Conformity Assessment support and technical documentation development
  • AI transparency disclosure drafting — privacy policies, terms, user notices
  • GDPR and EU AI Act integrated compliance planning
  • AI vendor agreement review and DPA updating
  • Post-market monitoring policy documentation

Frequently Asked Questions

Does the EU AI Act apply to my SaaS product?

The EU AI Act applies to providers placing AI systems on the EU market and to deployers of AI systems in the EU — regardless of where the provider is based. If your SaaS product includes recommendation engines, automated decision-making, chatbots, scoring systems, or generative AI features serving EU users, the Act applies. Your specific obligations depend on how your system is classified under the four-tier risk framework.

What are the four risk tiers under the EU AI Act?

Prohibited (banned entirely): subliminal manipulation, social scoring, and real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions). High Risk: AI in employment, education, credit scoring, essential services, law enforcement — requires a Conformity Assessment, technical documentation, and EU database registration. Limited Risk: chatbots and emotion recognition — requires transparency disclosures telling users they are interacting with AI. Minimal Risk: spam filters, recommendation engines — no specific AI Act obligations beyond existing law.

What is a Conformity Assessment under the EU AI Act?

A Conformity Assessment is a mandatory evaluation demonstrating that a high-risk AI system complies with the EU AI Act's requirements — including technical documentation, risk management system, data governance, human oversight measures, accuracy and robustness testing, and post-market monitoring. For most high-risk AI systems, providers conduct self-assessment and issue an EU Declaration of Conformity.

How does the EU AI Act interact with GDPR?

The EU AI Act and GDPR create overlapping obligations for AI systems that process personal data — which is most AI systems. High-risk AI systems require both a GDPR DPIA and an AI Act Conformity Assessment. Training AI on personal data requires a GDPR lawful basis and AI Act data governance compliance. A single data incident can trigger stacked enforcement under both frameworks.

What transparency obligations apply to AI chatbots under the EU AI Act?

Under Article 50, providers of AI systems designed to interact with humans — including chatbots and virtual assistants — must ensure users are informed they are interacting with an AI system, unless this is obvious from context. This disclosure must be made no later than the first interaction. Failure to comply constitutes a violation of the Limited Risk tier obligations, with fines of up to €15 million or 3% of global annual turnover, whichever is higher.
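Mechanically, this disclosure is simple to implement. The sketch below shows one way to surface an Article 50-style notice at the start of a chat session; the disclosure wording and session handling are illustrative assumptions, not a compliance-reviewed implementation.

```python
# Minimal sketch: prepend an AI disclosure to the first message of a chat
# session. The disclosure text and session dict are illustrative
# assumptions, not vetted compliance language.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def reply_with_disclosure(session: dict, answer: str) -> str:
    """Attach the AI disclosure to the session's first reply only."""
    if not session.get("disclosed"):
        session["disclosed"] = True
        return f"{AI_DISCLOSURE}\n\n{answer}"
    return answer

if __name__ == "__main__":
    session = {}
    print(reply_with_disclosure(session, "Hi! How can I help?"))  # disclosed
    print(reply_with_disclosure(session, "Happy to assist."))     # no repeat
```

The point of the per-session flag is that the disclosure appears once, at the first interaction, rather than cluttering every message — the Act requires users be informed, not reminded continuously.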

Ready to begin?

Book a free consultation. We assess your situation, confirm scope, and provide a fixed-fee quote — with no commitment required.

Send an Enquiry