Regulation Guide

EU AI Act - Regulation on Artificial Intelligence (EU 2024/1689)

Complete guide to the EU AI Act. Understand AI risk classifications, obligations for providers and deployers, and how Reversa helps organizations achieve compliance.

Key Figures

7%: Maximum fine as a share of global annual revenue
4: Risk classification levels
2026: Year of full application
1st: First comprehensive AI law globally

Overview

What is this regulation?

The EU AI Act is the world's first comprehensive legal framework regulating artificial intelligence. Adopted in 2024, it establishes a risk-based approach to AI governance, classifying AI systems into four risk categories: unacceptable risk (banned), high-risk (heavily regulated), limited risk (transparency obligations), and minimal risk (largely unregulated). The regulation applies to providers, deployers, importers, and distributors of AI systems placed on the EU market or whose output is used in the EU. It introduces mandatory requirements for high-risk AI systems including risk management, data governance, technical documentation, transparency, human oversight, and accuracy and robustness standards. The Act also addresses general-purpose AI models, including foundation models and generative AI, with specific obligations for systemic risk models.

Who does it affect?

Organizations and roles impacted by this regulation

1. AI system providers who develop or commission AI systems for placing on the EU market, regardless of whether they are established in the EU.

2. Deployers (users) of AI systems within the EU, including businesses and public authorities that use AI systems in their operations.

3. Providers of general-purpose AI models, including foundation models and large language models, with enhanced obligations for models posing systemic risks.

4. Importers and distributors of AI systems in the EU market, who must ensure compliance before making systems available.

Key Obligations

Core compliance requirements organizations must address

01. Risk Classification

Organizations must classify their AI systems according to the Act's risk categories. High-risk AI systems include those used in critical infrastructure, education, employment, law enforcement, migration, and justice administration.
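The classification step above can be sketched as a toy lookup over the Act's four risk tiers. This is illustrative only: the tier names follow the overview, but the domain mapping is a simplification for the sketch, not the Act's legal test.

```python
from enum import Enum

# The Act's four risk tiers, as summarized in the overview section.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "heavily regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Simplified set of high-risk domains named in this guide (Annex III-style);
# a real assessment requires legal analysis of the system's intended purpose.
HIGH_RISK_DOMAINS = {
    "critical infrastructure", "education", "employment",
    "law enforcement", "migration", "justice administration",
}

def tier_for_domain(domain: str) -> RiskTier:
    """Toy lookup: flags listed domains as high-risk, everything else minimal."""
    return RiskTier.HIGH if domain in HIGH_RISK_DOMAINS else RiskTier.MINIMAL

print(tier_for_domain("employment").name)  # HIGH
```

In practice, classification also has to account for the prohibited-practice and limited-risk tiers, which depend on how a system is used rather than on its sector alone.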

02. Risk Management System

High-risk AI providers must implement a continuous risk management system throughout the AI system's lifecycle, identifying, analyzing, estimating, and evaluating risks, and adopting appropriate mitigation measures.

03. Data Governance

Training, validation, and testing datasets for high-risk AI must meet quality criteria including relevance, representativeness, freedom from errors, and completeness, with specific provisions for bias detection and mitigation.

04. Transparency and Human Oversight

High-risk AI systems must be designed to allow effective human oversight and must come with clear instructions for use. In certain contexts, people must also be informed that they are interacting with an AI system.

05. Technical Documentation

Providers of high-risk AI systems must prepare comprehensive technical documentation demonstrating compliance, including system architecture, development methodology, training procedures, and testing results.

06. Conformity Assessment

High-risk AI systems must undergo conformity assessment procedures before being placed on the market, with some categories requiring assessment by notified bodies.

Penalties for Non-Compliance

The EU AI Act establishes a tiered penalty structure. Violations of the prohibited AI practices can result in fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. Non-compliance with most other obligations, including those for high-risk AI systems, carries fines of up to 15 million euros or 3% of global annual turnover. Supplying incorrect or misleading information to authorities can result in fines of up to 7.5 million euros or 1.5% of global annual turnover. For SMEs and startups, the lower of the two amounts in each tier applies. Member states must also lay down rules on penalties for other infringements and ensure enforcement mechanisms are effective, proportionate, and dissuasive.
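The tiered caps above can be expressed as a small calculation: each tier pairs a fixed amount with a turnover percentage, and the applicable ceiling is the higher of the two (the lower for SMEs and startups). A minimal sketch, with a hypothetical `max_fine` helper:

```python
# Illustrative sketch of the AI Act's tiered fine ceilings.
# Amounts in euros; turnover is worldwide annual turnover.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # banned AI practices
    "high_risk_obligation": (15_000_000, 0.03),   # most other obligations
    "misleading_information": (7_500_000, 0.015), # incorrect info to authorities
}

def max_fine(tier: str, turnover: float, sme: bool = False) -> float:
    """Return the fine ceiling for a tier: the higher of the fixed amount
    and the turnover percentage, or the lower of the two for SMEs."""
    fixed, pct = TIERS[tier]
    caps = (fixed, pct * turnover)
    return min(caps) if sme else max(caps)

# A company with EUR 1bn worldwide turnover violating a prohibition:
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

Note how the percentage cap dominates for large companies, while the fixed amount dominates for smaller ones; actual fines are set by enforcement authorities and can be far below these ceilings.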

Implementation Timeline

Key milestones and compliance deadlines

Apr 2021

European Commission publishes the original AI Act proposal.

Dec 2023

Political agreement reached between European Parliament and Council.

Jul 2024

AI Act published in the Official Journal of the European Union.

Aug 2024

The AI Act enters into force.

Feb 2025

Prohibitions on unacceptable-risk AI practices become applicable.

Aug 2025

Rules for general-purpose AI models become applicable.

Aug 2026

Full application of high-risk AI system obligations.

How Reversa Helps

Purpose-built tools for navigating this regulation with confidence

Regulatory Radar

24/7 monitoring of hundreds of official sources - the AI Office, national authorities, standardization bodies, and the EU Official Journal. Receive same-morning notifications when AI Act guidance, standards, or enforcement updates are published.

AI-Powered Analysis

Deep-dive regulatory impact analysis with sector-specialized AI agents that extract concrete AI Act obligations, risk classifications, and compliance requirements relevant to your organization.

Legislative Twins

Map AI Act obligations to your organization's specific context - creating digital representations of how the regulation's risk categories, prohibitions, and requirements affect your particular activities and AI systems.

Automated Reporting

Generate newsletters, compliance radars, and reports for committees and stakeholders automatically - keeping your team aligned on AI Act developments without manual effort.

Frequently Asked Questions

Common questions about this regulation

When does the EU AI Act apply?
The EU AI Act entered into force in August 2024 and is being phased in gradually. Prohibitions on unacceptable-risk AI practices apply from February 2025. Rules for general-purpose AI models apply from August 2025. The full set of obligations for high-risk AI systems applies from August 2026. Certain provisions, including those for AI systems already on the market, have extended transition periods through 2027.
What AI practices are banned under the AI Act?
The AI Act prohibits: social scoring by public authorities and private actors, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), AI systems that exploit vulnerabilities of specific groups, subliminal or manipulative techniques that materially distort behavior, emotion recognition in workplaces and educational institutions, untargeted scraping of facial images to build recognition databases, biometric categorisation to infer sensitive attributes, and predicting criminal behavior based solely on profiling or personality traits.
How does the AI Act affect general-purpose AI and foundation models?
Providers of general-purpose AI models (including large language models) must provide technical documentation, comply with EU copyright law, and publish summaries of training data. Models posing systemic risk face additional obligations including model evaluations, adversarial testing, cybersecurity measures, energy consumption reporting, and incident reporting. The AI Office oversees compliance for general-purpose AI models.
How can Reversa help with AI Act compliance?
Reversa helps organizations prepare for the AI Act through its Regulatory Radar (24/7 monitoring of AI Office publications, national authorities, and standardization bodies), AI-Powered Analysis (sector-specialized agents that extract concrete obligations from regulatory texts), Legislative Twins (mapping how AI Act requirements affect your specific activities and systems), and Automated Reporting (generating compliance radars and reports for stakeholders). The platform ensures you stay ahead as the regulation rolls out and new standards emerge.

Get Ahead of the EU AI Act with Reversa

From risk classification to documentation, navigate every AI Act obligation with confidence.
