EU AI Act vs DORA
How AI-specific regulation intersects with the financial sector's digital operational resilience framework
Quick Comparison
Side-by-side overview of key regulatory dimensions
Purpose
EU AI Act: Regulating AI systems across all sectors based on risk classification, from prohibited practices to minimal-risk transparency obligations
DORA: Ensuring digital operational resilience of financial entities by managing ICT risks, incidents, testing, and third-party dependencies

Scope
EU AI Act: All sectors: any provider, deployer, importer, or distributor of AI systems placed on the EU market or affecting people in the EU
DORA: Financial sector only: banks, insurers, investment firms, payment institutions, crypto-asset service providers, and critical ICT third-party providers

Risk approach
EU AI Act: Four-tier system classifying AI systems by intended use: unacceptable risk (banned), high-risk (strict conformity requirements), limited risk (transparency), minimal risk (voluntary)
DORA: Proportionality-based: risk management requirements scale with entity size, risk profile, and criticality of ICT services, but all entities must meet baseline requirements

Assessment and testing
EU AI Act: Conformity assessments for high-risk AI (self-assessment or third-party depending on category), fundamental rights impact assessments for deployers, and post-market monitoring obligations
DORA: Mandatory digital operational resilience testing programme including vulnerability assessments, scenario-based testing, and threat-led penetration testing (TLPT) for significant entities

Governance
EU AI Act: Quality management systems, risk management systems, technical documentation, human oversight mechanisms, and transparency requirements for AI providers and deployers
DORA: Management body must approve ICT risk strategy, receive regular ICT risk training, allocate budgets, and bear personal accountability for DORA compliance

Penalties
EU AI Act: Prohibited practices: up to EUR 35 million or 7% of global turnover; high-risk non-compliance: up to EUR 15 million or 3%; supplying incorrect, incomplete, or misleading information to authorities: up to EUR 7.5 million or 1%
DORA: Determined by Member States; for critical ICT providers, periodic penalty payments of up to 1% of average daily worldwide turnover per day of non-compliance (for up to six months)

Third-party risk
EU AI Act: Value chain obligations: providers must ensure downstream deployers receive adequate information; importers and distributors have due diligence duties; general-purpose AI model providers face transparency and safety obligations
DORA: Dedicated ICT third-party risk framework: register of all ICT providers, mandatory contractual provisions, concentration risk management, and direct EU-level oversight of critical ICT providers via Lead Overseer
Key Differences
What sets these regulations apart
The AI Act regulates technology type; DORA regulates a sector
The AI Act applies wherever AI systems are used, across every industry. DORA applies specifically to financial entities regardless of what technology they use. A financial institution using AI must comply with both: the AI Act for its AI systems and DORA for its overall ICT resilience.
DORA mandates continuous operational resilience testing
DORA requires ongoing testing programmes including vulnerability scanning, scenario-based testing, and threat-led penetration testing (TLPT) at least every three years for entities identified as significant. The AI Act requires pre-market conformity assessments and post-market monitoring but does not prescribe a comparable continuous operational testing regime.
The AI Act bans certain AI practices outright
The AI Act prohibits AI systems deemed to pose unacceptable risks, such as social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), exploitation of the vulnerabilities of specific groups, and emotion recognition in workplaces and educational institutions. DORA does not prohibit specific technologies; it requires resilience regardless of the technology used.
DORA addresses ICT concentration risk
DORA uniquely addresses concentration risk, the danger of too many financial entities relying on the same ICT provider. It requires entities to assess whether outsourcing creates systemic dependencies and provides for EU-level oversight of critical ICT providers. The AI Act does not have equivalent provisions for AI provider concentration.
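The concentration check described above can be sketched as a simple query over a register of ICT providers, loosely in the spirit of DORA's register of information. The data, entity names, and 50% threshold below are invented for illustration; DORA itself prescribes no single numeric threshold.

```python
# Illustrative sketch: flag ICT providers relied on by a large share of
# financial entities in a register. Threshold and data are hypothetical.
from collections import Counter

def concentrated_providers(register: dict[str, list[str]],
                           threshold: float) -> list[str]:
    """Return providers used by more than `threshold` share of entities."""
    n_entities = len(register)
    # Count each provider once per entity, even if listed multiple times.
    usage = Counter(p for providers in register.values() for p in set(providers))
    return sorted(p for p, n in usage.items() if n / n_entities > threshold)

register = {
    "Bank A":    ["CloudCo", "PayNet"],
    "Bank B":    ["CloudCo"],
    "Insurer C": ["CloudCo", "DataHub"],
    "Broker D":  ["DataHub"],
}
print(concentrated_providers(register, 0.5))  # ['CloudCo'] (3 of 4 entities)
```

A real concentration assessment would of course weigh criticality and substitutability of each service, not just a headcount of dependent entities.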
The AI Act has the highest potential fines
The AI Act's maximum fine of EUR 35 million or 7% of global turnover for prohibited practices is among the highest in EU law, well above the GDPR's ceiling of EUR 20 million or 4%. DORA's penalty framework is less harmonised at the EU level, with most amounts left to national discretion apart from the periodic penalties for critical ICT providers.
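The "fixed amount or percentage of turnover, whichever is higher" mechanics can be made concrete with a small calculation. This is a sketch of the arithmetic only; the tier names are our own labels, not statutory terms.

```python
# Illustrative sketch of the AI Act's tiered fine ceilings: each tier is
# the greater of a fixed euro amount or a share of global annual turnover.
# Tier labels are informal shorthand, not the regulation's wording.
TIERS = {
    "prohibited_practice":     (35_000_000, 0.07),  # up to EUR 35M or 7%
    "high_risk_noncompliance": (15_000_000, 0.03),  # up to EUR 15M or 3%
    "misleading_information":  (7_500_000,  0.01),  # up to EUR 7.5M or 1%
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the fine ceiling: whichever of the two amounts is higher."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# For a firm with EUR 2 billion turnover, 7% (EUR 140M) exceeds EUR 35M,
# so the percentage governs; for a small firm the fixed amount governs.
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
print(max_fine("prohibited_practice", 100_000_000))    # 35000000
```

For large groups the percentage is almost always the binding figure, which is why turnover-based ceilings dominate the compliance discussion.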
Where They Overlap
Areas where both regulations share common ground
Both require robust risk management frameworks (the AI Act for AI systems and DORA for ICT systems), with financial institutions needing to integrate both into their enterprise risk management
Both impose governance obligations on senior management: the AI Act requires human oversight of AI systems, while DORA requires management body accountability for ICT resilience
Both address third-party risk: the AI Act through value chain obligations for AI providers, deployers, and distributors; DORA through its ICT third-party risk framework
Both require documentation and record-keeping: the AI Act mandates technical documentation and logging for high-risk AI, while DORA requires documentation of ICT risk management and incident reporting
Which Applies to You?
Common scenarios and which regulation takes precedence
You are a bank deploying AI for credit scoring and fraud detection
Both regulations apply. AI-based credit scoring is classified as high-risk under the AI Act, requiring conformity assessments, transparency, and human oversight. As a financial entity, DORA requires you to manage the AI system as part of your ICT risk framework, including resilience testing and third-party oversight if the AI is provided externally. Build an integrated compliance approach.
You are an AI startup selling solutions to companies outside the financial sector
The AI Act applies to your AI systems based on their risk classification. DORA does not apply since your clients are not financial entities. Focus on AI Act conformity requirements, particularly if your solutions fall into the high-risk category (e.g., AI for recruitment, education, law enforcement).
You are an insurance company using AI to automate claims processing
Both apply. AI used for risk assessment and pricing in life and health insurance is expressly high-risk under the AI Act, and automated claims handling may trigger further obligations depending on its function; high-risk systems require conformity assessments, transparency, and human oversight. DORA requires the AI system to be part of your ICT risk management framework, with appropriate resilience testing and third-party risk management if you use an external AI provider.
You are a payment institution that does not use AI-based systems
Only DORA applies. As a financial entity, you must comply with DORA's ICT risk management, incident reporting, resilience testing, and third-party oversight requirements regardless of whether you use AI. The AI Act does not apply if you do not develop, deploy, or use AI systems.
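The scenarios above follow a simple two-axis rule: the AI Act tracks whether you provide or deploy AI systems, while DORA tracks whether you are a financial entity. A minimal sketch of that decision logic, with illustrative flags rather than legal definitions:

```python
# Illustrative applicability check mirroring the scenarios above.
# The two booleans are simplifications of detailed legal scoping tests.
def applicable_regulations(is_financial_entity: bool, uses_ai: bool) -> set[str]:
    """Return which of the two regimes apply under the simplified rule."""
    regs = set()
    if uses_ai:
        regs.add("EU AI Act")  # follows the AI system, in any sector
    if is_financial_entity:
        regs.add("DORA")       # follows the entity, whatever its technology
    return regs

print(applicable_regulations(True, True))    # bank using AI: both
print(applicable_regulations(False, True))   # non-financial AI vendor: AI Act
print(applicable_regulations(True, False))   # payment firm without AI: DORA
```

In practice each flag hides its own scoping analysis (whether a system meets the AI Act's definition of an AI system, whether an entity falls within DORA's Article 2 list), but the two regimes do combine independently in this way.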
Frequently Asked Questions
Common questions about these regulations