The Ultimate DORA Vendor Assessment Checklist for AI Solutions

The entry into force of Regulation (EU) 2022/2554, better known as the Digital Operational Resilience Act (DORA), marks a real shift in how EU financial institutions must think about risk. From 17 January 2025, the job is no longer just “manage capital well”. It’s also: prove your ICT can withstand disruption in a world of escalating threats.

At the same time, banks are rolling out generative AI (GenAI) and large language models (LLMs) into processes that are increasingly business-critical: customer support, fraud detection, compliance support, credit decisioning, internal knowledge search, and more.

That timing creates a tricky regulatory reality. Traditional vendor assessments (financial checks + a couple of security certificates like ISO 27001) are not enough for AI vendors. DORA pushes for supply chain transparency, business continuity guarantees, and a realistic exit strategy — meaning you must be able to leave a vendor without breaking your operations.

With AI, those requirements become more technical than most procurement processes are ready for. “Code” becomes probabilistic models. “Databases” become vector stores. “Updates” become silent model swaps that change behavior.

This article translates DORA’s core requirements (especially the contract and oversight expectations in Articles 28–30) into a practical, expert-level checklist tailored to AI solution vendors, including RAG architectures, fine-tuning, and sub-outsourcing dependencies.

Chapter 1. The regulatory imperative: DORA meets AI reality

1.1. Third-party risk management is no longer optional “guidance”

DORA fundamentally changes ICT third-party risk management. What used to be “soft law” in guidance becomes enforceable obligations, with direct accountability at the management body level.

A key principle in Article 28: the financial entity remains responsible for operational resilience even when services are outsourced. If a vendor’s AI credit scoring fails, you can’t simply point to the SaaS provider.

This is harder for AI because you’re being asked to control a system that can behave like a “black box.”

Here’s the gap DORA forces you to close:

Risk management aspectTraditional softwareAI solutions (GenAI/LLM)What DORA effectively demands
DeterminismSame input → same outputProbabilistic output; hallucinations and driftOperational risk controls (incl. monitoring)
TransparencySource code can be auditedModel weights/data often opaqueAccess to info + auditability
Data portabilityStructured DBs migrate cleanlyVector DBs + embeddings tied to modelExit strategy that actually works
Supply chainLibraries, OS, dependenciesFoundation models, cloud GPUs, vector DBs, orchestration frameworksSub-outsourcing oversight + concentration risk

1.2. DORA and the EU AI Act: the “double lock”

Financial institutions are also navigating the EU AI Act, where some financial AI systems (like creditworthiness assessment) are typically treated as high-risk, triggering requirements around data quality, transparency, and human oversight.

A useful mental model:

  • DORA is the infrastructure resilience foundation.
  • AI Act is the model governance and safety layer.

You can’t meaningfully “human oversee” a system that is unavailable due to provider outages. And if a vendor fine-tunes on bank data, you’re instantly in a three-way intersection: GDPR + DORA + AI Act.

1.3. Critical or Important Functions (CIF): where the rules get strict

DORA centers on whether a service supports a Critical or Important Function (CIF) — functions whose failure could materially impact financial performance, reliability, or service continuity.

In AI, many use cases naturally fall into CIF territory:

  • Fraud detection (failure can mean direct losses + AML issues)
  • Algorithmic trading (latency or API outage can cause market losses)
  • Customer service chatbots (if they’re a primary channel)

For vendors supporting CIFs, DORA expects much stronger contractual controls, including involvement in advanced testing (like TLPT) and tough termination/exit conditions.

Chapter 2. Due diligence and risk assessment (before you sign)

2.1. Supply chain mapping for AI is non-negotiable

Traditional “Know Your Vendor” checks are not enough. DORA pushes you to understand sub-outsourcing risk, especially for critical functions.

A typical modern AI application (for example, RAG-based) may include:

  • SaaS vendor (the legal counterparty)
  • Cloud provider (AWS/Azure/GCP infrastructure)
  • Foundation model provider (OpenAI/Anthropic/Mistral — often via API)
  • Vector database (Pinecone/Weaviate/Qdrant)
  • Orchestration frameworks (LangChain/LlamaIndex)

Your contract and oversight approach should assume the real system is a chain.

Practical insight: Require a Dependency Map that shows not only libraries, but also API dependencies.
Example: using OpenAI via Azure can mean different enterprise controls than using a public API directly.

2.2. Require an AI Bill of Materials (AI-BOM)

To support DORA-style asset inventory and vulnerability management, ask vendors for an AI Bill of Materials (AI-BOM) — an AI equivalent of SBOM.

Unlike SBOM, an AI-BOM should cover things like:

  • Training data provenance (sources, licenses, personal data exposure)
  • Model lineage and architecture (base weights, fine-tuning approach)
  • Infrastructure used in training/inference (relevant for risk + continuity)

Why it matters: without knowing what model and components are inside the product, tracking model-specific vulnerabilities or supply chain issues becomes guesswork.

2.3. Vendor stability: financial + geopolitical + concentration risk

DORA expects you to consider where the vendor is registered and where processing happens. In AI, this becomes critical because inference may run in places you didn’t expect.

Your assessment should include:

  • Inference geography: where GPUs actually run inference (often different from data storage region)
  • Single-model dependency: if the vendor is “all-in on one model”, you have a single point of failure
  • Model portability (“model agnosticism”): can they switch from one foundation model to another under pressure?

Chapter 3. Contracts: what “good” looks like under Article 30

3.1. Article 30, translated for AI vendors

Article 30 is the heart of vendor contract requirements. For AI, you must adapt the usual clauses to AI-specific realities.

Mandatory clause (Art. 30)AI interpretationKey risk if missing
Clear description of servicesInclude model access, inference latency, accuracy expectations, update policyHidden dependencies; misunderstood service scope
Data processing regionsSpecify regions for vector storage, inference, backupsUnauthorized cross-border processing
Availability & integritySLA for model/API availability, not just the web app; change control for model behaviorSilent model updates → drift and failures
Audit & access rightsAuditability of MLOps, RAG pipelines, logging, incident evidence“You can’t audit hyperscalers” becomes a blocker
Exit strategyExport raw data + metadata + adapters, not just vectorsVendor lock-in via embeddings/proprietary formats
Participation in testingTLPT readiness for CIF; clear rules for pooled testingVendor refusal breaks your compliance posture

3.2. Sub-outsourcing and the “chain of trust”

Even if regulatory technical standards evolve, the risk doesn’t disappear. Your contract should give you the right to:

  • be notified about changes to critical subproviders
  • object to changes that materially increase risk

In AI, switching from one foundation model to another is not a normal “upgrade.” It can materially change safety, jailbreak susceptibility, accuracy, and compliance posture.

Practical insight: Require notification not only for legal subcontractor changes, but for material model and architecture changes (for example, switching foundation model versions).

Chapter 4. Technical resilience and testing (DORT + TLPT + AI-specific testing)

4.1. Why classic security testing is not enough for AI

DORA requires a digital operational resilience testing program. For AI, vulnerability scanning and SAST won’t catch the big problems.

You must test AI-specific attack vectors, such as:

  • Prompt injection
  • Model inversion
  • Data poisoning in RAG pipelines

Your checklist should require evidence of AI red teaming — testing that targets the model’s behavior, not just the network perimeter.

4.2. TLPT for CIF and the reality of multi-tenant SaaS

For CIF-related services, TLPT participation is effectively mandatory.

The practical problem: if a SaaS AI vendor serves 100 banks, they can’t run 100 separate TLPTs against production.

The workable approach is pooled TLPT: one large accredited test, results shared across client institutions. Your vendor should have a documented process for this.

4.3. Testing non-deterministic failure modes

An AI system “failure” may be:

  • a hallucinated answer during a critical flow
  • degraded retrieval quality
  • inference latency spikes that cascade into timeouts

Vendors should demonstrate resilience testing that looks like chaos engineering for AI:

  • What happens if vector search degrades?
  • What happens if inference goes from 2 seconds to 20 seconds?
  • Does the bank’s workflow fail safely?

Chapter 5. Exit strategy and portability (where AI lock-in is most dangerous)

5.1. The embedding trap in RAG

In RAG, documents become embeddings created by a specific embedding model and stored in a vector DB.

If you leave a vendor:

  • exporting vectors alone may be useless
  • vectors are often mathematically incompatible across embedding models and configurations

Checklist requirement: the vendor must guarantee export of:

  • raw source data
  • chunking metadata and strategy
  • mappings between chunks, vectors, and sources
    Not just “here are your vectors, good luck”.

5.2. Ownership of fine-tuning outputs (LoRA/adapters)

If the vendor fine-tunes using bank data (for example, with PEFT/LoRA), the adapter weights may embed valuable domain knowledge.

The contract should clearly state:

  • the bank owns any fine-tuned adapters trained on its data
  • the vendor must deliver them on exit (commonly as .safetensors or .bin)

5.3. Functional equivalence testing

DORA exit strategy expectations aren’t satisfied by “we can export data”. You need confidence service quality won’t collapse during migration.

Require a golden dataset of benchmark prompts and expected outcomes so you can test whether an alternative provider or in-house model is functionally equivalent.

Chapter 6. Incident management and reporting (AI incidents are different)

6.1. What counts as an AI incident

DORA introduces strict timelines for reporting major ICT-related incidents. AI vendors must detect more than “server down”.

Examples of AI-specific incidents:

  • mass generation of toxic or noncompliant content
  • data leakage via prompt injection
  • sudden accuracy collapse or drift

Your checklist should require AI observability: quality and safety metrics monitored in near-real time, not discovered a week later.

6.2. Cooperation during incidents (and cost control)

DORA-style contracts require vendors to assist during incidents without surprise pricing.

AI forensics can get expensive: inference logs, retrieval traces, model/version tracking, and evidence preservation.

Your contract should define:

  • what logs are retained
  • access rules
  • cost model for incident support
    (And importantly, don’t rely on vague “best efforts”.)

Chapter 7. The complete DORA AI vendor assessment checklist

Structured across the vendor lifecycle.

Part A: Governance and strategy

IDCategoryAssessment questionRationale (DORA / Tech)Criticality
G1ClassificationIs it defined whether the AI solution supports a Critical or Important Function (CIF)?Determines strictness of all downstream requirements🔴 High
G2AI ActHas the system been classified under the EU AI Act (High-risk vs limited risk)?Compliance risks overlap and compound🔴 High
G3Sub-outsourcingHas the vendor provided a full dependency map (foundation model → cloud → vector DB)?Supply chain transparency; concentration risk🟠 Medium
G4GeographyCan the vendor guarantee inference + vector storage in the EU (or adequate jurisdictions)?Data sovereignty, cross-border risk🔴 High
G5AI-BOMCan the vendor provide an AI Bill of Materials (e.g., CycloneDX format)?Asset inventory + vulnerability transparency🟠 Medium

Part B: Technical security and testing

IDCategoryAssessment questionRationale (DORA / Tech)Criticality
T1Red teamingDoes the vendor run regular AI red teaming (prompt injection, jailbreaking, etc.)?AI-specific vulnerability exposure🔴 High
T2TLPT (CIF)For CIF: will the vendor support pooled TLPT (e.g., TIBER-EU aligned)?Required for critical setups; operational feasibility🔴 High
T3ObservabilityIs there monitoring for drift (data/concept drift) and hallucination rates?Early anomaly detection🟠 Medium
T4IsolationIs fine-tuning isolated (single-tenant) to prevent cross-client leakage?Data segregation and confidentiality🔴 High
T5EncryptionAre embeddings encrypted at rest and in transit, and who controls the keys?Embeddings can sometimes be reconstructed into text🔴 High

Part C: Exit strategy and contractual protections

IDCategoryAssessment questionRationale (DORA / Tech)Criticality
E1Data exportDoes the vendor guarantee export of raw data + chunking metadata, not just vectors?Prevents technical lock-in🔴 High
E2IP for adaptersDoes the bank own fine-tuned adapters/LoRA weights trained on its data?Protects training investment🟠 Medium
E3Functional equivalenceIs there proven ability to run an equivalent solution elsewhere (tests, benchmarks)?Exit must be realistic, not theoretical🟠 Medium
E4SLA qualityDoes the SLA include model latency, RTO/RPO, and service continuity metrics?AI performance is part of operational continuity🔴 High
E5Change notificationMust the vendor notify you of material foundation model/version changes?Prevents “silent drift” and compliance surprises🟠 Medium

Conclusion

Assessing AI vendors under DORA isn’t just contract review. It’s a cross-functional job that forces legal, security, risk, and engineering teams to speak the same language.

The real challenge is translating “digital operational resilience” into concrete, testable AI realities:

  • model versions and drift controls
  • RAG pipelines and vector portability
  • latency and failure-mode behavior
  • sub-outsourcing and concentration risk

Using the checklist above helps you move beyond formal compliance toward something more valuable: an AI architecture that can keep operating when the inevitable disruptions hit.

If you want, I can also turn this into:

  • a one-page vendor questionnaire (ready for procurement), or
  • a scoring model (weights per question, with pass/fail thresholds for CIF vs non-CIF).

Works cited:

  1. ICT and cyber risk – for DORA entities – CSSF, accessed December 15, 2025, https://www.cssf.lu/en/ict-and-cyber-risk-for-dora-entities/
  2. Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 – Publications Office, accessed December 15, 2025, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R2554
  3. Guidelines on outsourcing arrangements | European Banking Authority, accessed December 15, 2025, https://www.eba.europa.eu/activities/single-rulebook/regulatory-activities/internal-governance/guidelines-outsourcing-arrangements
  4. DORA compliance checklist: A guide for financial entities and their technology partners, accessed December 15, 2025, https://www.acronis.com/en/blog/posts/dora-compliance-checklist-a-guide-for-financial-entities-and-their-technology-partners/
  5. DORA and AI: How AI Supports DORA Compliance – Legartis, accessed December 15, 2025, https://www.legartis.ai/blog/dora-ai
  6. Decoding the EU AI Act & DORA: A FAIR Perspective on Compliance – The FAIR Institute, accessed December 15, 2025, https://www.fairinstitute.org/blog/eu-ai-act-dora-fair-perspective-on-compliance
  7. Understanding your role in the EU AI Act and DORA compliance – Genesys, accessed December 15, 2025, https://www.genesys.com/blog/post/understanding-your-role-in-the-eu-ai-act-and-dora-compliance
  8. Boost financial security with DORA & AI Act – Sopra Steria, accessed December 15, 2025, https://www.soprasteria.com/insights/details/dora-and-ai-act-boost-your-cybersecurity-posture
  9. Preparing for DORA: Subcontracting ICT Services – Contract Requirements – Morgan Lewis, accessed December 15, 2025, https://www.morganlewis.com/blogs/sourcingatmorganlewis/2024/07/preparing-for-dora-subcontracting-ict-services-contract-requirements
  10. Special topic – Artificial intelligence | European Banking Authority, accessed December 15, 2025, https://www.eba.europa.eu/publications-and-media/publications/special-topic-artificial-intelligence
  11. DORA contractual clauses for agreements with ICT service providers – Fabasoft, accessed December 15, 2025, https://www.fabasoft.com/en/news/dora-contractual-clauses-agreements-ict-service-providers
  12. JC 2024 53 Final Report on draft RTS ob subcontracting DORA – | European Securities and Markets Authority, accessed December 15, 2025, https://www.esma.europa.eu/sites/default/files/2024-07/JC_2024_53_Final_report_DORA_RTS_on_subcontracting.pdf
  13. European Commission adopts revised DORA Subcontracting RTS – a partial retreat on monitoring sub-contractors? | Herbert Smith Freehills Kramer, accessed December 15, 2025, https://www.hsfkramer.com/notes/tmt/2025-posts/european-commission-adopts-revised-dora-subcontracting-rts-a-partial-retreat-on-monitoring-sub-contractors
  14. What Is an AI-BOM (AI Bill of Materials)? & How to Build It – Palo Alto Networks, accessed December 15, 2025, https://www.paloaltonetworks.com/cyberpedia/what-is-an-ai-bom
  15. AI-BOM: Building an AI Bill of Materials – Wiz, accessed December 15, 2025, https://www.wiz.io/academy/ai-security/ai-bom-ai-bill-of-materials
  16. AI Privacy Risks & Mitigations – Large Language Models (LLMs) – European Data Protection Board, accessed December 15, 2025, https://www.edpb.europa.eu/system/files/2025-04/ai-privacy-risks-and-mitigations-in-llms.pdf
  17. The Role of SBOMs in Managing DORA Compliance | FOSSA Blog, accessed December 15, 2025, https://fossa.com/blog/role-sboms-managing-dora-compliance/
  18. DORA + SBOM Compliance: Securing the Software Supply Chain – Anchore, accessed December 15, 2025, https://anchore.com/sbom/dora-overview/
  19. DORA Contract Compliance Guide for Financial Firms – 3rdRisk, accessed December 15, 2025, https://www.3rdrisk.com/blog/dora-compliant-contracts
  20. Switching LLM Providers: Why It’s Harder Than It Seems | Requesty Blog, accessed December 15, 2025, https://www.requesty.ai/blog/switching-llm-providers-why-it-s-harder-than-it-seems
  21. ECB Guide on outsourcing cloud services to cloud service providers – ECB Banking Supervision, accessed December 15, 2025, https://www.bankingsupervision.europa.eu/framework/legal-framework/public-consultations/pdf/ssm.pubcon240603_draftguide.en.pdf
  22. Agentic drift: The hidden risk that degrades AI agent performance – IBM, accessed December 15, 2025, https://www.ibm.com/think/insights/agentic-drift-hidden-risk-degrades-ai-agent-performance
  23. Financial services compliance with the EU AI Act and DORA can be streamlined, accessed December 15, 2025, https://www.pinsentmasons.com/out-law/analysis/financial-services-compliance-eu-ai-act-dora-streamlined
  24. DORA Compliance: Financial Services AI Security Requirements – VerityAI, accessed December 15, 2025, https://verityai.co/blog/dora-compliance-financial-services-ai-security
  25. DORA Deep Dive: Threat-Led Penetration Testing (TLPT) – usd AG, accessed December 15, 2025, https://www.usd.de/en/dora-deep-dive-threat-led-penetration-testing-tlpt/
  26. Threat Led Penetration Testing: what is it and why does DORA require it?, accessed December 15, 2025, https://cybersecurity.bureauveritas.com/services/integrated-approach/dora/what-is-threat-led-penetration-testing
  27. DORA and its impact on UK financial entities and ICT service providers – PwC UK, accessed December 15, 2025, https://www.pwc.co.uk/industries/financial-services/insights/dora-and-its-impact-on-uk-financial-entities-and-ict-service-providers.html
  28. Template – DORA Questionnaire – UpGuard, accessed December 15, 2025, https://content.upguard.com/hubfs/templates/Template%20-%20DORA%20Questionnaire.xlsx
  29. Ensuring Resilience in AI – Booz Allen, accessed December 15, 2025, https://www.boozallen.com/insights/ai-research/ensuring-resilience-in-ai.html
  30. AI Observability: How to Keep LLMs, RAG, and Agents Reliable in Production | LogicMonitor, accessed December 15, 2025, https://www.logicmonitor.com/blog/ai-observability
  31. State of AI: Enterprise Adoption & Growth Trends | Databricks Blog, accessed December 15, 2025, https://www.databricks.com/blog/state-ai-enterprise-adoption-growth-trends
  32. Beyond Vector Databases: RAG Architectures Without Embeddings – DigitalOcean, accessed December 15, 2025, https://www.digitalocean.com/community/tutorials/beyond-vector-databases-rag-without-embeddings
  33. Exit Strategy Requirements: EU DORA Compliance for Data Platforms – Airbyte, accessed December 15, 2025, https://airbyte.com/data-engineering-resources/exit-strategy-eu-dora-compliance
  34. RAG Vector Database Selection: Pinecone vs Weaviate vs ChromaDB for Developers, accessed December 15, 2025, https://customgpt.ai/rag-vector-database-selection/
  35. Efficient Fine-Tuning of Large Language Models with LoRA | Artificial Intelligence – ARTiBA, accessed December 15, 2025, https://www.artiba.org/blog/efficient-fine-tuning-of-large-language-models-with-lora
  36. EU Data Act Significant New Switching Requirements Due to Take Effect for Data Processing Services – Latham & Watkins LLP, accessed December 15, 2025, https://www.lw.com/en/insights/eu-data-act-significant-new-switching-requirements-due-to-take-effect-for-data-processing-services
  37. LLM-Based Code Translation Needs Formal Compositional Reasoning – EECS at Berkeley, accessed December 15, 2025, https://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-174.pdf
  38. DORA Compliance Checklist & Guide | Bitsight, accessed December 15, 2025, https://www.bitsight.com/learn/compliance/dora-compliance-checklist
  39. Monitor the Health, Performance, and Security of Your AI Application Stack with AI Agent and AI Infrastructure Monitoring | Splunk, accessed December 15, 2025, https://www.splunk.com/en_us/blog/observability/monitor-the-health-performance-and-security-of-your-ai-application-stack-with-ai-agent-and-ai-infrastructure-monitoring.html
  40. DORA is now live – are your ICT contracts compliant? | Herbert Smith Freehills Kramer, accessed December 15, 2025, https://www.hsfkramer.com/notes/tmt/2025-posts/DORA-is-now-live-%E2%80%93-are-your-ICT-contracts-compliant-

Leave a Reply