Engineering Trust in the Age of DORA: Exit Strategy and Technical Sovereignty in the Artificial Intelligence Era

The foundation of trust in financial institutions has shifted. Regulators and customers once judged institutions by capital adequacy, liquidity buffers, and reputation. Today, digital operational resilience increasingly determines whether an institution deserves trust.

This shift explains why the European Union adopted the Digital Operational Resilience Act, Regulation (EU) 2022/2554. The regulation closes the era in which financial institutions could treat technology as a largely self-regulated domain. Regulators no longer accept disaster recovery plans that exist only in documentation. Instead, they expect institutions to prove, through engineering, that they can survive severe disruptions affecting third-party infrastructure.

At the centre of this regulatory shift stands Article 28, which governs information and communication technology third-party risk management. More specifically, Article 28(8) introduces a mandatory exit strategy for all services that support critical or important functions. As a result, technology procurement now requires architectural foresight. Institutions must design systems with the assumption that separation from a provider will eventually happen.

This requirement becomes especially demanding in the context of generative artificial intelligence. Banks, insurers, and financial technology companies increasingly depend on large language models, vector databases, and model-as-a-service platforms. Unlike traditional software ecosystems, the artificial intelligence ecosystem relies heavily on proprietary formats, closed models, and provider-specific semantics. Consequently, vendor lock-in risks increase dramatically.

This article analyses Article 28(8) through the lens of modern artificial intelligence system design. It explains why exit strategies fail in practice, identifies technical and legal barriers, and proposes engineering approaches that enable real operational resilience. The article targets technology leaders, risk executives, and architects responsible for long-term system sustainability.

Part One: The Digital Operational Resilience Act as a Governance Framework

Management Accountability Cannot Be Delegated

To understand exit strategies, one must first understand accountability under the Digital Operational Resilience Act. Article 5 makes this point explicit. The management body of a financial institution remains fully responsible for information and communication technology risk management. The institution cannot delegate this responsibility to suppliers, cloud providers, or artificial intelligence vendors.

Therefore, when a critical service fails because a third-party artificial intelligence provider experiences an outage, regulators hold the institution accountable. The provider’s fault does not absolve the institution of responsibility. This principle fundamentally reshapes third-party risk governance.

Third-Party Risk Becomes Internal Risk

Article 28 builds on this foundation. It requires institutions to treat third-party risk as part of their own risk profile. In practice, this means that when an institution integrates a large language model into a customer-facing or risk-related process, it also imports the operational, probabilistic, and infrastructure risks of the provider.

Because of this integration, institutions must adopt a formal information and communication technology third-party risk strategy. Senior management must review and approve this strategy regularly. Moreover, the strategy must explicitly address services that support critical or important functions.

Concentration Risk and the End of Single-Cloud Comfort

Regulators also focus strongly on concentration risk. Article 28 and the supporting technical standards encourage institutions to avoid dependency on a single provider. As a result, single-cloud strategies become increasingly difficult to justify for critical functions.

In the context of artificial intelligence, this guidance creates a direct architectural implication. Institutions must consider multi-vendor and multi-cloud designs early, even when a single provider appears technically or commercially attractive.

Part Two: Understanding Article 28(8) Exit Strategy Requirements

Exit Strategy as an Operational Capability

Article 28(8) does not describe exit strategies as theoretical rights. Instead, it frames them as operational capabilities. Institutions must demonstrate that they can exit a provider relationship without disrupting business activities or degrading service quality.

This requirement introduces four essential characteristics that every valid exit strategy must address.

Comprehensive Risk Coverage

First, an exit strategy must cover more than total provider failure. It must also address deterioration of service quality.

In artificial intelligence systems, quality degradation manifests in several ways. Latency may increase. Response accuracy may decline. Hallucination rates may rise. Model updates may alter behaviour in ways that invalidate compliance assumptions.

Therefore, institutions must define quantitative quality thresholds. They must also specify that breaching these thresholds triggers the exit process. Without clear metrics, institutions cannot demonstrate regulatory readiness.
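As a minimal sketch, the Python example below shows one way such thresholds and their breach logic might be encoded. The specific metrics and limit values are illustrative assumptions, not figures prescribed by the regulation.

```python
from dataclasses import dataclass

@dataclass
class QualityThresholds:
    """Hypothetical exit-trigger thresholds for a third-party model service."""
    max_p95_latency_ms: float = 1500.0
    min_accuracy: float = 0.92           # share of responses passing evaluation
    max_hallucination_rate: float = 0.02

def breaches(t: QualityThresholds,
             p95_latency_ms: float,
             accuracy: float,
             hallucination_rate: float) -> list[str]:
    """Return the list of threshold breaches observed in the current window."""
    found = []
    if p95_latency_ms > t.max_p95_latency_ms:
        found.append("latency")
    if accuracy < t.min_accuracy:
        found.append("accuracy")
    if hallucination_rate > t.max_hallucination_rate:
        found.append("hallucination_rate")
    return found

# A breach in any dimension should open an exit-process review.
if breaches(QualityThresholds(), p95_latency_ms=2100.0,
            accuracy=0.95, hallucination_rate=0.01):
    print("Quality threshold breached: trigger exit-process review")
```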

Business Continuity as a Design Constraint

Second, exit strategies must preserve business continuity. Institutions must continue providing services during and after migration.

For real-time artificial intelligence systems such as fraud detection, credit decision support, or customer interaction platforms, this requirement eliminates slow recovery approaches. Cold backups and long rebuild times do not satisfy regulatory expectations.

Instead, institutions must design warm or hot fallback capabilities. These capabilities must allow rapid switching to alternative providers or internal deployments with minimal service interruption.
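The following sketch illustrates the switching pattern, assuming two interchangeable provider wrappers that each expose a complete method. The names and interfaces are hypothetical, not a specific vendor's application programming interface.

```python
from typing import Protocol

class InferenceProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class FailoverRouter:
    """Route inference to a primary provider, falling back to a warm standby.

    Hypothetical sketch: 'primary' and 'standby' are any objects exposing
    a complete(prompt) method, for example wrappers around two vendor SDKs.
    """
    def __init__(self, primary: InferenceProvider, standby: InferenceProvider):
        self.primary = primary
        self.standby = standby

    def complete(self, prompt: str) -> str:
        try:
            return self.primary.complete(prompt)
        except Exception:
            # The standby is kept warm (deployed and prompt-validated) so
            # that switching does not interrupt the business service.
            return self.standby.complete(prompt)
```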

Data and Service Portability in Practice

Third, exit strategies must guarantee real portability of data and services. Article 28 explicitly requires institutions to remove services and transfer associated data securely and with integrity to alternative providers or internal systems.

This requirement creates friction with many artificial intelligence business models. Providers often rely on proprietary data formats, closed vector spaces, and limited export capabilities. Nevertheless, regulators expect institutions to overcome these barriers through architecture and contractual design.

The option to reintegrate services internally implies that institutions must maintain technical competence. They cannot rely exclusively on external providers for critical capabilities.

Mandatory and Recurrent Testing

Fourth, exit strategies must undergo regular testing. Institutions must simulate provider exit scenarios and document the results.

Importantly, testing must verify data completeness, service restoration, and quality preservation. Legal exit clauses alone do not meet this requirement. Only technical evidence demonstrates compliance.

Part Three: Why Artificial Intelligence Creates a New Form of Vendor Lock-In

From Infrastructure Dependency to Cognitive Dependency

Traditional vendor lock-in focused on infrastructure. Databases, operating systems, and virtual machines created migration friction. However, generative artificial intelligence introduces a deeper dependency.

When institutions integrate large language models, they rely on model behaviour, not just infrastructure. Prompts, responses, and safety controls depend on the internal characteristics of the model. Consequently, migration becomes a cognitive challenge rather than a purely technical one.

Lack of Universal Standards

Artificial intelligence ecosystems lack equivalents to Structured Query Language or standard data schemas. Each provider defines unique application programming interfaces, prompt formats, embedding models, and similarity metrics.

Because of this fragmentation, institutions cannot simply replace one provider with another. Migration requires changes to prompts, data pipelines, vector stores, and evaluation logic.

Part Four: The Three Layers of Lock-In in Artificial Intelligence Systems

Layer One: Model Behaviour and Inference

At the inference layer, lock-in arises from model-specific behaviour. Prompts that perform well on one model often fail on another. Safety filters, response structure, and reasoning patterns vary significantly.

As a result, institutions must refactor prompts and retest compliance controls during migration. Without a pre-validated alternative model, exit strategies fail in practice.

Layer Two: Semantic Embeddings and Vector Databases

At the semantic layer, vector databases introduce some of the strongest lock-in effects. Embeddings generated by one model do not transfer meaningfully to another embedding space.

Therefore, changing embedding providers requires complete re-embedding of the knowledge base. For large datasets, this process consumes significant time and compute resources. During re-indexing, retrieval quality often degrades, which conflicts with continuity requirements.
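The sketch below outlines such a re-embedding job, assuming documents are read from institution-controlled storage; the new_embed function and new_index client are hypothetical interfaces. The old index continues serving queries until the new one passes retrieval-quality checks, which preserves continuity during the rebuild.

```python
def reembed_corpus(documents, new_embed, new_index, batch_size=256):
    """Re-embed an entire corpus into a fresh index (hypothetical sketch).

    `documents` yields (doc_id, text) pairs from institution-controlled
    storage; `new_embed` maps a list of texts to vectors; `new_index` is
    the target vector store. The old index keeps serving production
    queries until the new one is validated.
    """
    batch = []
    for doc_id, text in documents:
        batch.append((doc_id, text))
        if len(batch) == batch_size:
            vectors = new_embed([t for _, t in batch])
            new_index.upsert([(i, v) for (i, _), v in zip(batch, vectors)])
            batch.clear()
    if batch:  # flush the final partial batch
        vectors = new_embed([t for _, t in batch])
        new_index.upsert([(i, v) for (i, _), v in zip(batch, vectors)])
```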

Layer Three: Context, Memory, and History

At the context layer, artificial intelligence systems accumulate value through interaction history and user profiles. Many software-as-a-service platforms store this data in proprietary formats.

Even when export exists, institutions often lose metadata that supports analytics, quality monitoring, and future model training. Consequently, customer experience suffers after migration.

Part Five: Engineering Approaches to Enable Exit Strategies

Multi-Cloud Architecture as a Regulatory Response

Because concentration risk threatens resilience, institutions increasingly adopt multi-cloud designs. In artificial intelligence systems, multi-cloud architecture requires more than data replication.

Institutions must design abstraction layers that allow inference across environments. They must also synchronise vector data between platforms. Although this approach increases engineering complexity, it provides the strongest protection against provider failure and regulatory non-compliance.
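One common building block is write-through replication of vector data, sketched below with hypothetical primary and secondary store clients. The design choice is deliberate: dual writes keep both environments continuously usable, so either can serve retrieval if the other must be exited.

```python
class ReplicatedVectorStore:
    """Write-through replication across two vector platforms (sketch).

    Hypothetical: `primary` and `secondary` are clients for, say, a cloud
    service and an on-premise deployment, each exposing upsert and query
    methods with the signatures used below.
    """
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def upsert(self, item_id: str, vector: list[float], metadata: dict) -> None:
        # Synchronous dual write for simplicity; production designs often
        # queue the secondary write with retries to decouple availability.
        self.primary.upsert(item_id, vector, metadata)
        self.secondary.upsert(item_id, vector, metadata)

    def query(self, vector: list[float], top_k: int = 10):
        return self.primary.query(vector, top_k)
```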

Part Six: Making Data Export Technically Feasible

Vector Database Export Challenges

Many vector database services restrict data extraction. For example, application programming interfaces often require known identifiers or limit query results.

Therefore, institutions must design around these limitations.

Shadow Indexing as a Core Control

Shadow indexing provides the most reliable solution. Each vector insertion must also persist identifiers and metadata in institution-controlled storage. This design guarantees full export capability during exit scenarios.

Although shadow indexing introduces additional complexity, it enables compliance with Article 28(8).
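A minimal sketch of the pattern follows, using SQLite as a stand-in for any institution-controlled database; the vector store client and its upsert signature are assumptions.

```python
import json
import sqlite3

class ShadowIndexedStore:
    """Mirror every vector insertion into institution-controlled storage.

    Sketch: the shadow table records identifiers, source text, and
    metadata so that a full export never depends on the provider's
    listing or query capabilities.
    """
    def __init__(self, vector_store, shadow_db_path: str = "shadow.db"):
        self.vector_store = vector_store
        self.db = sqlite3.connect(shadow_db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS shadow "
            "(id TEXT PRIMARY KEY, text TEXT, metadata TEXT)"
        )

    def upsert(self, item_id: str, vector, text: str, metadata: dict) -> None:
        self.vector_store.upsert(item_id, vector, metadata)  # provider write
        self.db.execute(
            "INSERT OR REPLACE INTO shadow VALUES (?, ?, ?)",
            (item_id, text, json.dumps(metadata)),
        )
        self.db.commit()

    def export_ids(self) -> list[str]:
        """Enumerate every stored identifier without asking the provider."""
        return [row[0] for row in self.db.execute("SELECT id FROM shadow")]
```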

Open and Portable Data Formats

Institutions must export data in open formats such as Apache Parquet or JavaScript Object Notation Lines. These formats support interoperability and long-term accessibility. Proprietary formats undermine exit feasibility.
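The JSON Lines case reduces to a few lines of standard-library Python, as sketched below; the same records can be converted to Apache Parquet with a library such as pyarrow. The record fields shown are illustrative.

```python
import json

def export_jsonl(records, path: str) -> None:
    """Dump records as JSON Lines: one self-describing object per line."""
    with open(path, "w", encoding="utf-8") as fh:
        for rec in records:
            fh.write(json.dumps(rec, ensure_ascii=False) + "\n")

export_jsonl(
    [{"id": "doc-1", "text": "...", "metadata": {"source": "kb"}}],
    "knowledge_base_export.jsonl",
)
```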

Part Seven: Treating Prompts and Observability as Code

Prompt Management Discipline

Institutions must manage prompts as versioned assets. Storing prompts exclusively inside provider interfaces creates unacceptable operational risk.

Version control systems and dedicated prompt management tools allow institutions to adapt prompts when switching models.
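A simple convention, sketched below, keeps prompts as files under version control and loads them by name and version at runtime; the directory layout and file names are illustrative assumptions.

```python
from pathlib import Path

# Prompts live in version control alongside application code, e.g.:
#   prompts/credit_summary/v3.txt
PROMPT_DIR = Path("prompts")

def load_prompt(name: str, version: str) -> str:
    """Load a reviewed, versioned prompt rather than a provider-hosted one."""
    return (PROMPT_DIR / name / f"{version}.txt").read_text(encoding="utf-8")

# A model migration then becomes a reviewable change: add v4.txt adapted
# to the new model and bump the version referenced in configuration.
template = load_prompt("credit_summary", "v3")
```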

Observability Through Open Standards

To avoid dependence on a single monitoring vendor, institutions should instrument systems using open telemetry standards. This approach ensures that logs and traces remain portable and auditable.
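The sketch below shows one way to instrument inference calls with the OpenTelemetry Python application programming interface; the span and attribute names are illustrative conventions, not a mandated schema.

```python
from opentelemetry import trace

# Requires the opentelemetry-api and opentelemetry-sdk packages.
tracer = trace.get_tracer("ai.inference")

def traced_completion(provider, model_name: str, prompt: str) -> str:
    """Record each inference call as a portable, vendor-neutral span."""
    with tracer.start_as_current_span("llm.completion") as span:
        span.set_attribute("llm.vendor", provider.__class__.__name__)
        span.set_attribute("llm.model", model_name)
        span.set_attribute("llm.prompt_chars", len(prompt))
        response = provider.complete(prompt)
        span.set_attribute("llm.response_chars", len(response))
        return response
```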

Part Eight: Knowledge Distillation as a Fallback Strategy

Preserving Behaviour Without Proprietary Weights

Knowledge distillation allows institutions to train internal models that approximate the behaviour of proprietary models. By capturing production inputs and outputs, institutions can create training datasets for open-weight models.

This approach supports internal fallback deployments and strengthens technical sovereignty.
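A minimal capture mechanism might look like the following sketch, which appends production pairs to a JSON Lines corpus in the prompt-and-completion shape common to open-weight fine-tuning tooling; the field names are assumptions, and capture must respect the contractual constraints discussed next.

```python
import json
import time

def log_for_distillation(prompt: str, response: str,
                         path: str = "distillation_corpus.jsonl") -> None:
    """Append a production input/output pair to a fine-tuning corpus.

    Sketch only: records accumulate in a prompt/completion format usable
    by most open-weight fine-tuning tooling. Capture must comply with the
    provider contract and data-protection rules before any training.
    """
    record = {
        "prompt": prompt,
        "completion": response,
        "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")
```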

Contractual Alignment Remains Essential

However, institutions must align this strategy with contractual terms. Exit clauses must explicitly allow use of outputs for business continuity purposes. Without contractual clarity, legal risk undermines technical resilience.

Part Nine: From Documentation to Proof Through Testing

The Priority One Feasibility Snapshot

To demonstrate that exit strategies work, institutions should conduct recurring priority one feasibility snapshots. These exercises test export scripts, deploy fallback environments, and measure performance differences between the incumbent system and the fallback.

If quality degradation exceeds defined thresholds, institutions must treat the exit strategy as failed and initiate remediation.
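The sketch below expresses one such drill as an automated check, with hypothetical exporter, fallback, and evaluation interfaces; the degradation threshold is an illustrative value.

```python
def run_exit_drill(exporter, fallback, eval_suite,
                   max_quality_drop: float = 0.05) -> bool:
    """Execute one documented exit drill (hypothetical sketch).

    `exporter` wraps the incumbent provider's export scripts, `fallback`
    is a deployable alternative stack, and `eval_suite` scores a system
    against a fixed evaluation set on a 0..1 scale.
    """
    export = exporter.export_all()           # test the export scripts
    assert export.is_complete()              # verify data completeness
    fallback.restore(export)                 # stand up the fallback environment

    baseline = eval_suite.score(exporter.live_system)
    restored = eval_suite.score(fallback)    # measure the quality difference

    passed = (baseline - restored) <= max_quality_drop
    if not passed:
        print("Exit drill FAILED: quality degradation exceeds threshold")
    return passed
```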

Conclusion: Exit Strategy as a Measure of Trust

The Digital Operational Resilience Act transforms exit strategies from legal boilerplate into engineering reality. For institutions that rely on artificial intelligence, this transformation changes how systems must be designed, tested, and governed.

Vendor lock-in in artificial intelligence no longer represents a commercial inconvenience. It represents a regulatory risk. Institutions that engineer portability, abstraction, and fallback capability gain more than compliance. They gain freedom.

By investing in exit-ready architectures today, financial institutions build durable trust, operational resilience, and long-term strategic autonomy.
