New elements in the digital resilience framework for the financial sector

Just three to five years ago, most financial institutions ranked credit risk, liquidity risk, and market volatility as their top priorities. However, a profound shift has occurred. Today, ICT-related risks are not only operational concerns but have become major business risks, capable of causing systemic failures, reputational damage, and even the collapse of institutions.
This transformation is driven by two forces: deep dependency on ICT providers (especially cloud service providers, core banking vendors, and outsourced AI systems), and the growing sophistication and frequency of cyberattacks, including incidents in countries such as Latvia that have underscored regional and sector-wide vulnerabilities.
Two key regulations now define digital resilience in this sector: the Digital Operational Resilience Act (DORA), which sets uniform ICT risk management rules for all EU financial entities, and the Artificial Intelligence Act (AI Act), establishing governance standards for high-risk AI systems used in finance.
The Digital Operational Resilience Act (DORA)
DORA (applicable from 17 January 2025) creates a single, comprehensive digital resilience framework for all financial entities across the EU, including banks, insurance companies, investment firms, crypto-asset service providers, payment institutions, and fintechs. Notably, Latvia was among the proactive Member States that had already taken significant steps toward strengthening digital resilience before DORA's enactment.
DORA creates a common baseline of resilience while introducing broader obligations covering cross-border ICT third-party oversight, resilience testing, and management body accountability.
1. ICT risk management framework
At the heart of DORA is the requirement for every financial entity to establish a robust, comprehensive, and documented ICT risk management framework. This is not a technical exercise alone – it is a strategic responsibility placed on the management body (e.g. board of directors or executive leadership).
Governance is the central element. The management body is ultimately accountable for overseeing and approving the ICT risk management strategy and for ensuring its alignment with business objectives. The ICT risk management framework must form part of the institution's overall risk management framework and be periodically reviewed and audited. Institutions must continuously monitor ICT risks, threats, vulnerabilities, and interdependencies across systems in order to identify and assess risks. The framework consists of five steps – identify, protect, detect, respond, and recover.
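As a mental model, the five steps can be pictured as an ordered, continuous cycle. The sketch below is purely illustrative; the example activities attached to each step are assumptions, not regulatory text:

```python
from enum import Enum

class FrameworkStep(Enum):
    """The five steps of the DORA ICT risk management framework.
    The activity descriptions are illustrative examples only."""
    IDENTIFY = "map ICT assets, dependencies, and threats"
    PROTECT = "harden systems and control access"
    DETECT = "monitor for anomalies and incidents"
    RESPOND = "contain incidents and notify stakeholders"
    RECOVER = "restore services and feed lessons back into the framework"

# The framework is a continuous cycle, not a one-off exercise:
for step in FrameworkStep:
    print(f"{step.name.lower():8s} -> {step.value}")
```

Iterating over the enum preserves definition order, which mirrors the sequence of the framework; in practice the cycle repeats as the recover step feeds back into identification.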
2. ICT-related incident reporting
DORA introduces a significant reform in how financial entities must handle and report ICT-related incidents, reflecting the growing urgency of cyber threats and operational disruptions in the sector. Under the regulation, institutions are now required to establish robust mechanisms for detecting, classifying, and reporting major ICT incidents to the competent supervisory authorities within strict timelines. These incidents must be evaluated using standardised impact criteria, such as the duration of the disruption, the number of customers or transactions affected, and any compromise to data integrity or confidentiality.
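The classification logic can be sketched as a simple rules check over the standardised impact criteria. The thresholds below are hypothetical placeholders for illustration, not the actual values set in DORA's regulatory technical standards:

```python
from dataclasses import dataclass

@dataclass
class IncidentImpact:
    """Standardised impact criteria for an ICT-related incident."""
    duration_hours: float        # how long the disruption lasted
    clients_affected_pct: float  # share of clients affected (0-100)
    data_compromised: bool       # integrity/confidentiality breach

def is_major_incident(impact: IncidentImpact) -> bool:
    """Illustrative classification: the incident is 'major' if any
    criterion crosses a threshold. Thresholds here are hypothetical,
    not the values from the DORA technical standards."""
    return (
        impact.duration_hours >= 2
        or impact.clients_affected_pct >= 10
        or impact.data_compromised
    )

# A 3-hour outage affecting 5% of clients, with no data breach,
# is classified as major on the duration criterion alone.
print(is_major_incident(IncidentImpact(3.0, 5.0, False)))  # True
```

A classification like this would sit at the front of the reporting pipeline, so that incidents crossing the "major" line automatically trigger the notification workflow toward the supervisory authority.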
Importantly, DORA also introduces stronger enforcement tools. Recurrent or severe incidents can trigger deeper supervisory investigations, potentially leading to corrective measures, audits, or sanctions. This enhanced accountability mechanism creates strong incentives for institutions to invest in early detection, internal coordination, and post-incident learning processes. The framework is designed to provide a clearer, real-time picture of systemic vulnerabilities, improving coordination and response at both national and EU levels.
3. Digital resilience testing
DORA introduces a new and more proactive element into the financial sector's resilience planning: advanced testing of digital defences.
Financial entities must now conduct regular vulnerability assessments, scenario-based tests, and end-to-end evaluations of their critical ICT systems. For significant institutions, DORA requires threat-led penetration testing (TLPT) – simulations of real-world cyberattacks executed under controlled conditions by certified testers. These tests are designed to uncover hidden vulnerabilities and to exercise institutions' detection, response, and recovery capabilities. Findings must lead to remediation actions and may be reviewed by regulators.
4. Managing the risk of ICT third-party service providers
Recognising the systemic reliance on third-party ICT providers – including cloud services, core banking platforms, and AI systems – DORA establishes structured, enforceable rules for managing outsourcing and external dependencies:
- Financial institutions must continuously assess the risk level of all ICT service providers, especially those deemed critical.
- Outsourcing contracts must clearly specify service level objectives, incident response expectations, data access rights, location of data processing, and exit strategies.
- Institutions and regulators must retain the right to access, audit, and inspect the ICT systems and premises of third-party providers.
- Each institution must maintain a standardised internal register of all ICT-related contracts, including criticality assessments and subcontracting chains.
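A minimal sketch of such a register as a data structure follows. The field names and example entries are assumptions for illustration, not the official register-of-information template defined in DORA's implementing standards:

```python
from dataclasses import dataclass, field

@dataclass
class ICTContract:
    """One entry in a (hypothetical) internal register of ICT contracts."""
    provider: str
    service: str
    critical: bool                 # outcome of the criticality assessment
    data_locations: list[str]      # where data is processed
    subcontractors: list[str] = field(default_factory=list)  # subcontracting chain
    exit_strategy: str = "unspecified"

register: list[ICTContract] = [
    ICTContract("ExampleCloud", "core banking hosting", True,
                ["EU/Ireland"], ["ExampleDC Ltd"], "12-month migration plan"),
    ICTContract("MailRelay Co", "transactional email", False, ["EU/Germany"]),
]

# Supervisors will typically focus on the subset of critical arrangements:
critical = [c for c in register if c.critical]
print([c.provider for c in critical])  # ['ExampleCloud']
```

Keeping the register in a structured, queryable form (rather than scattered contract files) is what makes criticality assessments and supervisory requests tractable.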
DORA also empowers EU supervisory authorities to designate certain ICT providers as "critical", subjecting them to direct oversight by the European Supervisory Authorities.
The AI Act and its role in digital resilience in financial services
The Artificial Intelligence Act is the world's first broad regulation of AI, applying a risk-based approach that classifies AI systems into four categories: prohibited, high-risk, limited risk, and minimal risk. In financial services, many AI applications – such as credit scoring and loan approval algorithms, fraud detection systems, customer behaviour prediction tools, and AI-driven trading platforms or robo-advisors – are classified as high-risk. These systems are subject to stringent requirements concerning data governance, transparency, documentation, human oversight, and robustness.
AI systems are not inherently resilient and can introduce new risks including bias in credit decision models, opaqueness in decision-making (the so-called "black box" problem), over-reliance on automation without human fallback, and vulnerabilities to adversarial attacks or data poisoning. The AI Act enhances digital resilience by mandating that AI systems used in high-risk financial contexts be responsibly designed, developed, and deployed. Key obligations for financial institutions include ensuring data quality and traceability, allowing meaningful human intervention in critical processes, ensuring robustness and cybersecurity, maintaining thorough documentation and risk assessments, and continuously monitoring deployed AI to detect unexpected outcomes or misuse. These measures significantly bolster the resilience of financial institutions' decision-making processes and reduce risks of systemic errors or regulatory breaches caused by automated tools.
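The continuous-monitoring obligation can be illustrated with a simple output-drift check: compare, say, the loan approval rate in a recent window against a reference baseline, and flag the model for human review when the deviation exceeds a tolerance. Both the metric and the tolerance here are hypothetical choices for illustration:

```python
def approval_rate(decisions: list[bool]) -> float:
    """Share of positive (approval) decisions in a window."""
    return sum(decisions) / len(decisions)

def drift_alert(baseline: list[bool], recent: list[bool],
                tolerance: float = 0.10) -> bool:
    """Flag the model for human review if the recent approval rate
    deviates from the baseline by more than the tolerance.
    The 10-point tolerance is a hypothetical value."""
    return abs(approval_rate(recent) - approval_rate(baseline)) > tolerance

baseline = [True] * 60 + [False] * 40  # 60% historical approval rate
recent = [True] * 40 + [False] * 60    # 40% in the recent window
print(drift_alert(baseline, recent))   # True: a 20-point shift exceeds the tolerance
```

In a production setting the monitored statistics would be richer (error rates, subgroup outcomes, input distributions), but the principle is the same: deployed AI is watched continuously, and anomalies route decisions back to a human.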
Similar to ICT systems, AI solutions are often sourced from external vendors. The AI Act imposes indirect obligations on users such as banks and fintechs to ensure that third-party AI systems comply with legal standards. Institutions must understand how the AI they use is built and maintained, include contractual provisions with AI vendors for access to system information and risk disclosures, and remain fully accountable for the consequences of AI-driven decisions, which cannot be outsourced.
What it means for financial institutions: a new operational mindset
First and foremost, digital resilience must become a strategic priority at the highest levels of financial institutions. What was once the domain of IT departments must now be fully integrated into enterprise risk frameworks and decision-making processes. DORA makes it explicitly clear that responsibility for ICT risk management lies with the management body. Senior leadership and boards of directors are now accountable for the institution's ability to withstand, respond to, and recover from ICT disruptions. This elevation of digital risk to the executive level represents a marked change in regulatory expectations – and institutions that fail to respond accordingly will be increasingly exposed to supervisory scrutiny and reputational risk.
In parallel, financial institutions will need to reassess and upgrade their internal governance structures. Risk management must be expanded to include dedicated ICT and AI governance, complete with clear roles across compliance, technology, legal, and business units. In firms deploying high-risk AI systems – such as those used in credit scoring, fraud detection, or robo-advisory services – this also includes setting up dedicated processes for oversight, documentation, and accountability. This is not merely about technical controls; it's about creating a transparent and auditable system of governance that spans from model design to decision outcomes.
Institutions must now continuously monitor for emerging ICT risks, cyber threats, and AI-related vulnerabilities. Incident detection and classification must become faster and more precise, and companies must be ready to report major ICT incidents to supervisory authorities within a tightly defined window. Similarly, they must maintain an up-to-date inventory of AI applications and third-party ICT providers, with constant oversight of performance, integrity, and contractual compliance. The traditional, static approach to risk management is no longer sufficient in a world of dynamic and interconnected threats.
This brings forward one of the most challenging aspects of the new regulations: third-party risk. Institutions must ensure that their outsourcing contracts meet new minimum standards, including audit rights, data access provisions, recovery requirements, and clear service levels. They must keep a standardised internal register of all third-party arrangements and be prepared to assess the criticality of these providers. Importantly, responsibility for risk cannot be offloaded to vendors. The financial institution remains fully accountable.
For many firms, TLPT is entirely new territory and will require new partnerships, resources, and skills. It also represents a cultural shift – from reactive defence to proactive resilience. Perhaps most fundamentally, financial institutions must now foster a culture grounded in trust, transparency, and preparedness. Under both DORA and the AI Act, documentation is essential: risk models, incident reports, AI decision logs, and vendor records must be maintained and made available to regulators. Equally, the ability to demonstrate ethical use of technology – particularly in AI systems affecting customers' financial lives – will become a competitive differentiator and a marker of credibility in the market.