Regulation Comparison

EU AI Act vs DORA

How AI-specific regulation intersects with the financial sector's digital operational resilience framework

Quick Comparison

Side-by-side overview of key regulatory dimensions

Primary Focus
EU AI Act

Regulating AI systems across all sectors based on risk classification, from prohibited practices down to minimal-risk systems subject only to voluntary measures

DORA

Ensuring digital operational resilience of financial entities by managing ICT risks, incidents, testing, and third-party dependencies

Scope
EU AI Act

All sectors: any provider, deployer, importer, or distributor of AI systems placed on the EU market or affecting people in the EU

DORA

Financial sector only: banks, insurers, investment firms, payment institutions, crypto-asset service providers, and critical ICT third-party providers

Risk Approach
EU AI Act

Four-tier system classifying AI systems by intended use: unacceptable risk (banned), high-risk (strict conformity requirements), limited risk (transparency), minimal risk (voluntary)

DORA

Proportionality-based: risk management requirements scale with entity size, risk profile, and criticality of ICT services, but all entities must meet baseline requirements

Assessment Requirements
EU AI Act

Conformity assessments for high-risk AI (self-assessment or third-party depending on category), fundamental rights impact assessments for certain deployers (including banks and insurers for specific use cases), and post-market monitoring obligations

DORA

Mandatory digital operational resilience testing programme including vulnerability assessments, scenario-based testing, and threat-led penetration testing (TLPT) for significant entities

Governance
EU AI Act

Quality management systems, risk management systems, technical documentation, human oversight mechanisms, and transparency requirements for AI providers and deployers

DORA

Management body must approve the ICT risk strategy, receive regular ICT risk training, allocate budgets, and bear ultimate responsibility for the entity's ICT risk management

Penalties
EU AI Act

Prohibited practices: up to EUR 35 million or 7% of global annual turnover (whichever is higher); most other non-compliance, including high-risk obligations: up to EUR 15 million or 3%; supplying incorrect, incomplete, or misleading information to authorities: up to EUR 7.5 million or 1%

DORA

Determined by Member States; for critical ICT providers, periodic penalties of up to 1% of average daily worldwide turnover per day of non-compliance (max 6 months)

Third-Party Provisions
EU AI Act

Value chain obligations: providers must ensure downstream deployers receive adequate information; importers and distributors have due diligence duties; general-purpose AI model providers face transparency and safety obligations

DORA

Dedicated ICT third-party risk framework: register of all ICT providers, mandatory contractual provisions, concentration risk management, and direct EU-level oversight of critical ICT providers via Lead Overseer

Key Differences

What sets these regulations apart

EU AI Act

The AI Act regulates technology type; DORA regulates a sector

The AI Act applies wherever AI systems are used, across every industry. DORA applies specifically to financial entities regardless of what technology they use. A financial institution using AI must comply with both: the AI Act for its AI systems and DORA for its overall ICT resilience.

DORA

DORA mandates continuous operational resilience testing

DORA requires ongoing testing programmes including vulnerability scanning, scenario-based testing, and threat-led penetration testing (TLPT) at least every three years for entities designated to perform it. The AI Act requires pre-market conformity assessments and post-market monitoring but does not prescribe the same kind of continuous operational testing regime.

EU AI Act

The AI Act bans certain AI practices outright

The AI Act prohibits AI systems deemed to pose unacceptable risks, such as social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), manipulation exploiting the vulnerabilities of specific groups, and emotion recognition in workplaces and educational institutions. DORA does not prohibit specific technologies; it requires resilience regardless of the technology used.
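
To illustrate how this tiered logic differs from DORA's technology-neutral approach, here is a minimal Python sketch. The tier names, the use-case mapping, and the default tier are simplified assumptions for illustration, not the Act's legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified stand-ins for the AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary measures only"

# Illustrative, non-exhaustive mapping; real classification needs legal analysis.
INTENDED_USE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def check_deployment(intended_use: str) -> RiskTier:
    """Refuse prohibited uses outright; return the tier for everything else."""
    tier = INTENDED_USE_TIERS.get(intended_use, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"'{intended_use}' is a prohibited practice under the AI Act")
    return tier
```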

DORA

DORA addresses ICT concentration risk

DORA uniquely addresses concentration risk, the danger of too many financial entities relying on the same ICT provider. It requires entities to assess whether outsourcing creates systemic dependencies and provides for EU-level oversight of critical ICT providers. The AI Act does not have equivalent provisions for AI provider concentration.
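
As a rough illustration of a concentration check, the sketch below assumes a hypothetical register schema and an arbitrary flag threshold; DORA's actual assessment (and the register of information it mandates) is richer and partly qualitative.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Arrangement:
    """One simplified row of an ICT third-party register (hypothetical schema)."""
    entity: str                      # financial entity relying on the service
    provider: str                    # ICT third-party provider
    supports_critical_function: bool

def flag_concentration(register: list[Arrangement],
                       threshold: int = 3) -> dict[str, int]:
    """Flag providers on which at least `threshold` distinct entities
    depend for a critical or important function."""
    dependents: defaultdict[str, set[str]] = defaultdict(set)
    for a in register:
        if a.supports_critical_function:
            dependents[a.provider].add(a.entity)
    return {p: len(e) for p, e in dependents.items() if len(e) >= threshold}
```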

EU AI Act

The AI Act has the highest potential fines

The AI Act's maximum fine of EUR 35 million or 7% of global turnover for prohibited practices is among the highest fine ceilings in EU regulation, well above the GDPR's 4% cap. DORA's penalty framework is less defined at the EU level, with most amounts left to national discretion except for critical ICT provider penalties.
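
To make the fine structure concrete, here is a small arithmetic sketch of both caps. The tier table mirrors the figures quoted above; the 180-day cutoff approximates DORA's six-month limit, and actual fines are set by authorities case by case, so this is illustrative only.

```python
def ai_act_max_fine(global_turnover_eur: float, violation: str) -> float:
    """Upper bound of an AI Act fine: the higher of a fixed cap and a share
    of worldwide annual turnover (simplified; for SMEs the lower applies)."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_noncompliance": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    cap, share = tiers[violation]
    return max(cap, share * global_turnover_eur)

def dora_periodic_penalty(avg_daily_worldwide_turnover_eur: float,
                          days: int) -> float:
    """Maximum periodic penalty for a critical ICT provider: up to 1% of
    average daily worldwide turnover per day of non-compliance."""
    return 0.01 * avg_daily_worldwide_turnover_eur * min(days, 180)  # ~6 months

# Example: a provider with EUR 1bn global turnover and a prohibited practice
print(ai_act_max_fine(1_000_000_000, "prohibited_practice"))  # 70000000.0
```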

Where They Overlap

Areas where both regulations share common ground

1. Both require robust risk management frameworks (the AI Act for AI systems and DORA for ICT systems), with financial institutions needing to integrate both into their enterprise risk management.

2. Both impose governance obligations on senior management: the AI Act requires human oversight of AI systems, while DORA requires management body accountability for ICT resilience.

3. Both address third-party risk: the AI Act through value chain obligations for AI providers, deployers, and distributors; DORA through its ICT third-party risk framework.

4. Both require documentation and record-keeping: the AI Act mandates technical documentation and logging for high-risk AI, while DORA requires documentation of ICT risk management and incident reporting.

Which Applies to You?

Common scenarios and which regulations apply

You are a bank deploying AI for credit scoring and fraud detection

Both regulations apply. AI-based credit scoring is classified as high-risk under the AI Act, requiring conformity assessments, transparency, and human oversight. As a financial entity, DORA requires you to manage the AI system as part of your ICT risk framework, including resilience testing and third-party oversight if the AI is provided externally. Build an integrated compliance approach.

EU AI Act · DORA

You are an AI startup selling solutions to companies outside the financial sector

The AI Act applies to your AI systems based on their risk classification. DORA does not apply since your clients are not financial entities. Focus on AI Act conformity requirements, particularly if your solutions fall into the high-risk category (e.g., AI for recruitment, education, law enforcement).

EU AI Act

You are an insurance company using AI to automate claims processing

Both apply. AI used for risk assessment and pricing in life and health insurance is classified as high-risk under the AI Act, and claims-automation systems can trigger conformity assessment, explainability, and human oversight obligations depending on how they affect policyholders. DORA requires the AI system to be part of your ICT risk management framework, with appropriate resilience testing and third-party risk management if using an external AI provider.

EU AI Act · DORA

You are a payment institution that does not use AI-based systems

Only DORA applies. As a financial entity, you must comply with DORA's ICT risk management, incident reporting, resilience testing, and third-party oversight requirements regardless of whether you use AI. The AI Act does not apply if you do not develop, deploy, or use AI systems.

DORA

Frequently Asked Questions

Common questions about these regulations

Does the AI Act apply to AI systems used by financial institutions?
Yes. The AI Act applies across all sectors, including financial services. Financial institutions that develop, deploy, or use AI systems must comply with both the AI Act (for the AI system itself) and DORA (for the ICT infrastructure and operational resilience). AI used for credit scoring and for risk assessment and pricing in life and health insurance is specifically listed as high-risk under the AI Act; AI used purely to detect financial fraud is expressly carved out of the high-risk credit-scoring category.
How should a bank's AI risk management relate to its DORA ICT risk framework?
The AI risk management system required by the AI Act should be integrated within the broader ICT risk management framework required by DORA. AI systems are a subset of ICT systems, so DORA's requirements for identification, protection, detection, response, and recovery apply to AI infrastructure. Additionally, AI-specific requirements from the AI Act, such as data governance, bias monitoring, and human oversight, must be layered on top.
If an AI system used by a financial institution fails, which regulation governs the response?
Both may apply depending on the nature of the failure. If the AI failure constitutes a major ICT incident (e.g., it disrupts critical financial services), DORA's incident reporting requirements are triggered. If the AI system is high-risk under the AI Act, the provider has post-market monitoring obligations and must report serious incidents to market surveillance authorities. Financial institutions should have incident response procedures that address both frameworks.
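A toy decision helper can make this dual-track logic explicit. The flag names and the returned duty strings below are illustrative assumptions, not legal definitions, and the real legal tests are more nuanced.

```python
def reporting_duties(major_ict_incident: bool,
                     high_risk_ai: bool,
                     serious_ai_incident: bool) -> list[str]:
    """Return which frameworks' incident-reporting duties are triggered
    (simplified decision logic mirroring the answer above)."""
    duties = []
    if major_ict_incident:
        duties.append("DORA: report major ICT-related incident to the competent authority")
    if high_risk_ai and serious_ai_incident:
        duties.append("AI Act: provider reports serious incident to the market surveillance authority")
    return duties

# Example: an AI outage that disrupts payments and also qualifies as serious
print(reporting_duties(major_ict_incident=True,
                       high_risk_ai=True,
                       serious_ai_incident=True))
```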
When do these regulations become fully applicable?
DORA has been fully applicable since January 17, 2025. The AI Act follows a phased timeline: prohibited AI practices apply from February 2, 2025, general-purpose AI model obligations from August 2, 2025, and most high-risk AI obligations from August 2, 2026, with an extended transition until August 2, 2027 for high-risk AI embedded in regulated products. Financial institutions should already be DORA-compliant and prepare for the AI Act's upcoming milestones.
How can Reversa help with AI Act and DORA compliance?
Reversa tracks regulatory developments for both frameworks in real time, including AI Act implementing acts and DORA technical standards. For financial institutions using AI, the platform maps how AI Act requirements for specific high-risk use cases integrate with DORA's ICT risk management framework, helping you build a single governance structure that satisfies both regulations.

Align AI Act and DORA Compliance with Reversa

Financial institutions using AI face requirements from both frameworks. Reversa helps you integrate AI governance into your ICT resilience strategy from a single platform.

Read the Full Guides
