EU AI Act - Regulation on Artificial Intelligence (Regulation (EU) 2024/1689)
Complete guide to the EU AI Act. Understand AI risk classifications, obligations for providers and deployers, and how Reversa helps organizations achieve compliance.
Overview
What is this regulation?
The EU AI Act is the world's first comprehensive legal framework regulating artificial intelligence. Adopted in 2024, it establishes a risk-based approach to AI governance, classifying AI systems into four risk categories: unacceptable risk (banned), high-risk (heavily regulated), limited risk (transparency obligations), and minimal risk (largely unregulated).

The regulation applies to providers, deployers, importers, and distributors of AI systems placed on the EU market or whose output is used in the EU. It introduces mandatory requirements for high-risk AI systems, including risk management, data governance, technical documentation, transparency, human oversight, and accuracy and robustness standards. The Act also addresses general-purpose AI models, including foundation models and generative AI, with specific obligations for models posing systemic risk.
Who does it affect?
Organizations and roles impacted by this regulation
AI system providers who develop or commission AI systems for placing on the EU market, regardless of whether they are established in the EU.
Deployers (users) of AI systems within the EU, including businesses and public authorities that use AI systems in their operations.
Providers of general-purpose AI models, including foundation models and large language models, with enhanced obligations for models posing systemic risks.
Importers and distributors of AI systems in the EU market who must ensure compliance before making systems available.
Key Obligations
Core compliance requirements organizations must address
Risk Classification
Organizations must classify their AI systems according to the Act's risk categories. High-risk AI systems include those used in critical infrastructure, education, employment, law enforcement, migration, and justice administration.
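As a first triage step, the four risk categories can be modeled as a simple lookup. The sketch below is illustrative only: the domain list is drawn from the sentence above (it is not exhaustive), and the `triage` helper is a hypothetical name of ours, not a substitute for legal analysis of Article 5 and Annex III.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (banned)
    HIGH = "high"                  # heavily regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative high-risk domains taken from the text above; not exhaustive.
HIGH_RISK_DOMAINS = {
    "critical infrastructure", "education", "employment",
    "law enforcement", "migration", "justice administration",
}

def triage(domain: str) -> RiskCategory:
    """First-pass triage by deployment domain (hypothetical helper;
    real classification requires case-by-case legal review)."""
    if domain.strip().lower() in HIGH_RISK_DOMAINS:
        return RiskCategory.HIGH
    return RiskCategory.MINIMAL  # default pending individual review

print(triage("Employment").value)  # → high
```

In practice a system can also fall into the limited or unacceptable categories regardless of domain, which is why a keyword lookup can only ever flag candidates for closer review.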
Risk Management System
High-risk AI providers must implement a continuous risk management system throughout the AI system's lifecycle, identifying, analyzing, estimating, and evaluating risks, and adopting appropriate mitigation measures.
Data Governance
Training, validation, and testing datasets for high-risk AI must meet quality criteria including relevance, representativeness, freedom from errors, and completeness, with specific provisions for bias detection and mitigation.
Transparency and Human Oversight
High-risk AI systems must be designed to allow effective human oversight and must provide clear instructions for use. Users must be informed they are interacting with AI systems in certain contexts.
Technical Documentation
Providers of high-risk AI systems must prepare comprehensive technical documentation demonstrating compliance, including system architecture, development methodology, training procedures, and testing results.
Conformity Assessment
High-risk AI systems must undergo conformity assessment procedures before being placed on the market, with some categories requiring assessment by notified bodies.
Penalties for Non-Compliance
The EU AI Act establishes a tiered penalty structure, with each fine capped at the higher of a fixed amount or a share of worldwide annual turnover. Violations related to prohibited AI practices can result in fines of up to 35 million euros or 7% of global annual turnover. Non-compliance with high-risk AI obligations carries fines of up to 15 million euros or 3% of global annual turnover. Supplying incorrect or misleading information to authorities can result in fines of up to 7.5 million euros or 1% of global annual turnover. For SMEs and startups, the lower rather than the higher of the two amounts applies in each tier. Member states must also establish rules on penalties for other infringements and ensure enforcement mechanisms are effective, proportionate, and dissuasive.
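The "up to X million euros or Y% of turnover" phrasing means the ceiling is the higher of the two amounts (the lower for SMEs and startups). A minimal sketch of that arithmetic for the two top tiers; the tier labels and the `max_fine` helper are our own illustrative names, and this is not legal advice:

```python
# Tier -> (fixed cap in EUR, share of worldwide annual turnover).
# Two of the Act's penalty tiers shown; labels are our own.
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
}

def max_fine(tier: str, turnover_eur: float, sme: bool = False) -> float:
    """Ceiling of the fine for a tier: the higher of the fixed cap and
    the turnover share; for SMEs/startups the lower of the two applies."""
    cap, share = TIERS[tier]
    pick = min if sme else max
    return pick(cap, share * turnover_eur)

# A firm with EUR 1 billion worldwide annual turnover:
print(max_fine("prohibited_practices", 1_000_000_000))  # → 70000000.0
print(max_fine("prohibited_practices", 1_000_000_000, sme=True))  # → 35000000
```

The turnover-based limb dominates for large firms (7% of EUR 1 billion exceeds the 35 million euro fixed cap), which is why the percentage, not the fixed amount, is usually the headline figure in enforcement discussions.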
Implementation Timeline
Key milestones and compliance deadlines
April 2021: European Commission publishes the original AI Act proposal.
December 2023: Political agreement reached between the European Parliament and the Council.
July-August 2024: AI Act is published in the Official Journal (12 July 2024) and enters into force (1 August 2024).
2 February 2025: Prohibitions on unacceptable-risk AI practices become applicable.
2 August 2025: Rules for general-purpose AI models become applicable.
2 August 2026: Full application of high-risk AI system obligations (with an extended deadline of 2 August 2027 for certain high-risk systems embedded in regulated products).
How Reversa Helps
Purpose-built tools for navigating this regulation with confidence
Regulatory Radar
24/7 monitoring of hundreds of official sources - the AI Office, national authorities, standardization bodies, and the EU Official Journal. Receive same-morning notifications when AI Act guidance, standards, or enforcement updates are published.
AI-Powered Analysis
Deep-dive regulatory impact analysis with sector-specialized AI agents that extract concrete AI Act obligations, risk classifications, and compliance requirements relevant to your organization.
Legislative Twins
Map AI Act obligations to your organization's specific context - creating digital representations of how the regulation's risk categories, prohibitions, and requirements affect your particular activities and AI systems.
Automated Reporting
Generate newsletters, compliance radars, and reports for committees and stakeholders automatically - keeping your team aligned on AI Act developments without manual effort.
Frequently Asked Questions
Common questions about this regulation