For the last decade, much of Europe’s financial sector lived with a convenient contradiction.
On one side, banks and insurers pushed hard into digital transformation, migrating critical workloads to US hyperscalers like Microsoft Azure, AWS, and Google Cloud. The reasons were obvious: scale, maturity, global reliability, and fast access to advanced AI tooling.
On the other side, Europe’s regulatory environment kept tightening. GDPR raised the bar on data protection. Supervisors increasingly focused on third-party risk. And now, with DORA applicable since January 2025, the conversation has shifted from “compliance paperwork” to operational resilience: can you prove critical functions remain available, controlled, and auditable under stress, including geopolitical stress?
At the same time, the US CLOUD Act remains a structural pressure point: it can compel US providers to produce data they control, even if that data sits physically in the EU.
This creates a real minefield. “Standard” cloud architecture patterns that used to be considered reasonable can become a source of systemic risk once you combine DORA’s accountability model with extraterritorial access risk.
This article breaks down:
- why CLOUD Act and DORA collide in practice,
- why “EU data boundary” messaging doesn’t solve sovereignty,
- why GenAI (especially Azure OpenAI) escalates the problem, and
- what an architecture looks like when you design for sovereignty, auditability, and exit readiness.
We’ll also replace the old “governance layer” idea many teams relied on with a more practical approach: an AI control plane designed specifically for regulated finance.
Part I: the legal collision is about jurisdiction, not geography
1. US CLOUD Act: “possession, custody, or control” beats location
The CLOUD Act (2018) was enacted to resolve the question exposed by the Microsoft Ireland case: whether US authorities could compel access to data stored outside the US. The law clarified that a provider can be required to disclose data in its possession, custody, or control, regardless of where it’s stored.
The word control matters because it’s interpreted broadly in corporate structures. If a US parent entity has authority over an EU subsidiary, courts may treat data held by the subsidiary as within the parent’s control. That’s the heart of the sovereignty problem: “EU region” is not the same as “EU jurisdiction”.
2. The FISA 702 shadow: access without notification
Beyond the CLOUD Act, FISA Section 702 raises an even harder issue for regulated institutions:
- orders may come with gag provisions,
- the customer may never be notified,
- intelligence collection has different standards than criminal process.
For a bank, the risk isn’t theoretical. If you cannot confidently reason about confidentiality under a state-actor threat model, you cannot honestly claim you “control” confidentiality outcomes for critical functions.
3. DORA: you can’t outsource accountability anymore
DORA’s core posture is simple: financial entities remain accountable for ICT risk management and resilience, even when services are outsourced.
That changes the game. “The provider is certified” or “the contract says X” is no longer a comfortable shield. Supervisors expect:
- real control over risk,
- evidence you can test and reproduce,
- and proof you can execute exit strategies.
DORA also introduces direct oversight for Critical ICT Third-Party Providers (CTPPs). Even if a provider builds more EU structure around services, that doesn’t automatically remove US legal exposure at the parent level.
4. Where it blows up: GDPR Article 48 + DORA third-party risk
GDPR Article 48 restricts disclosure to third-country authorities unless it is routed through an appropriate international mechanism (for example, MLAT pathways). The CLOUD Act was intentionally designed to bypass those slower international processes.
So you get an ugly conflict-of-laws reality:
- a provider may be compelled to disclose,
- the disclosure may violate EU expectations,
- and the financial entity still holds responsibility for resilience, confidentiality, and governance under DORA.
This is why regulators aren’t satisfied with “we store data in Europe”. They want architecture that reduces exposure by design, not by marketing narrative.
Part II: GenAI makes the sovereignty problem sharper, faster
1. “EU Data Boundary” helps residency, not sovereignty
Initiatives like the “EU Data Boundary” (keeping processing and storage in EU/EFTA regions) improve data residency. That’s good baseline hygiene.
But sovereignty is a different question: who can be legally compelled to produce or decrypt? If the provider can access keys or systems, physical location doesn’t eliminate extraterritorial pressure.
A useful way to think about it:
- Residency: where the bits sit.
- Sovereignty: who ultimately has enforceable power over the bits (and the keys).
2. Confidential computing is not a silver bullet for LLM workloads
Trusted execution environments (TEEs) and confidential computing are promising, especially for data-in-use. But for large-scale enterprise GenAI, there are practical constraints:
- performance overhead and bottlenecks,
- limited flexibility for multi-GPU scaling in some confidential setups,
- and the uncomfortable “root of trust” question: attestation and control planes are often still provider-operated.
For regulated finance, TEEs can be one control, but they rarely close the full DORA story on their own.
Part III: Sovereign cloud models reduce exposure by removing provider control
When institutions say “sovereign cloud”, they usually mean one thing: the US hyperscaler doesn’t hold the final keys or admin control.
Two practical patterns show up in Europe:
1. Trustee / Sovereign Controls model
A European operator sits between customer data and the hyperscaler’s platform. The operator controls:
- identity and privileged access,
- encryption key release decisions,
- and operational support boundaries.
The critical mechanism is typically External Key Management (EKM):
- keys are generated and stored outside the hyperscaler’s control, in EU-operated HSMs,
- the cloud services must request key operations,
- and the EU operator can approve or deny key use under policy.
In blunt terms: even if the provider is pressured, they can only hand over encrypted blobs, not readable data.
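To make the mechanism concrete, here is a minimal sketch of the key-release decision. All names (ReleaseRequest, KeyReleasePolicy, ExternalKeyManager) are hypothetical; a real deployment would front EU-operated HSMs with attested caller identities, not in-memory objects.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReleaseRequest:
    key_id: str
    requesting_service: str   # e.g. "storage-encryption"
    caller_jurisdiction: str  # asserted and attested out of band
    purpose: str              # e.g. "decrypt"

class KeyReleasePolicy:
    """The policy the EU operator enforces before any key operation."""
    ALLOWED_SERVICES = {"storage-encryption", "database-tde"}
    ALLOWED_PURPOSES = {"encrypt", "decrypt"}

    def evaluate(self, req: ReleaseRequest) -> bool:
        return (
            req.requesting_service in self.ALLOWED_SERVICES
            and req.purpose in self.ALLOWED_PURPOSES
            and req.caller_jurisdiction == "EU"
        )

class ExternalKeyManager:
    """Key material never leaves this boundary; callers get operations, not keys."""
    def __init__(self, policy: KeyReleasePolicy):
        self._policy = policy  # in production: backed by EU-operated HSMs

    def unwrap(self, req: ReleaseRequest, wrapped_dek: bytes) -> bytes:
        if not self._policy.evaluate(req):
            raise PermissionError(f"key release denied for {req.key_id}")
        # In production, the unwrap happens inside the EU-operated HSM.
        return b"plaintext-data-encryption-key"  # stand-in for the real DEK
```

The point of the sketch: the cloud platform never holds key material, only the results of individual, policy-approved operations.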
2. Fully native sovereign model (EU-only stack)
For the most sensitive workloads, institutions choose EU-only providers and stacks that are legally and operationally insulated from extraterritorial reach.
The tradeoff is real:
- fewer managed AI services,
- more self-hosting responsibility,
- more platform engineering.
But for core banking or highest sensitivity use cases, many teams accept that trade.
Part IV: Replace “governance as a document” with an AI control plane
Sovereign infrastructure is necessary, but it’s not sufficient.
DORA doesn’t just ask “where is it hosted?” It asks, implicitly:
- can you prove what happened,
- can you reproduce outcomes,
- can you show controls were applied,
- can you detect drift,
- and can you exit without rewriting your system?
That’s where most GenAI pilots fail. They’re built like demos: prompts in code, ad-hoc logging, unclear dependency chains, and “we’ll add governance later”.
The practical answer: an AI control plane for regulated finance
At Intellectum Lab we treat the missing piece as a control plane: a layer that sits between your business applications and model providers (Azure OpenAI, OpenAI, Anthropic, Mistral, or on-prem Llama). The goal is straightforward: turn a black-box GenAI call into a governed, auditable, testable system.
What the control plane must do in a DORA environment:
1) Per-request audit trail (reconstructable, step-by-step)
Every interaction should be logged with enough detail to rebuild the decision path:
- who initiated the request,
- what prompt was sent (and in which version),
- what sources were retrieved (for RAG),
- model/version/config used,
- outputs produced,
- and which policies/guardrails were enforced.
This is the difference between “we think it answered correctly” and “we can prove exactly how it answered.”
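As an illustration, a per-request record can be as simple as the following sketch. The field names are ours, not a standard schema, and content hashes stand in for raw text so the log itself holds no customer data.

```python
import datetime
import hashlib
import json
from dataclasses import asdict, dataclass, field

def sha256(text: str) -> str:
    """Hash content so the log can prove what was sent without storing it."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

@dataclass
class AuditRecord:
    request_id: str
    initiator: str                # authenticated user or service identity
    prompt_template_version: str  # prompts are versioned artifacts, not inline strings
    rendered_prompt_sha256: str   # hash of the exact prompt sent
    retrieved_sources: list       # document/chunk IDs used for RAG
    model: str                    # e.g. "provider/model@version"
    model_params: dict            # temperature, max tokens, ...
    output_sha256: str
    guardrails_applied: list      # which pre/post policies fired
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )

def log_interaction(record: AuditRecord, sink) -> None:
    """Append one JSON line per interaction to an append-only sink."""
    sink.write(json.dumps(asdict(record)) + "\n")
```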
2) Pre- and post-generation guardrails
DORA isn’t only about uptime. A single incorrect answer to a customer or regulator can be a material incident. Guardrails must operate both before and after generation:
- PII detection and handling policies,
- restricted topic / commitment blocking (no accidental promises),
- format and policy validation,
- hallucination and faithfulness checks where feasible.
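A minimal sketch of that wrapping pattern follows; the regex-based checks are deliberately naive stand-ins for production PII and commitment detectors.

```python
import re

# Deliberately naive detectors; production systems plug in real PII and
# faithfulness checks here.
PII_PATTERNS = [
    r"\b\d{16}\b",              # card-number-like digit run (illustrative)
    r"\b[\w.]+@[\w.]+\.\w+\b",  # email address
]
COMMITMENT_PHRASES = ["we guarantee", "you are approved", "we promise"]

def pre_check(prompt: str) -> str:
    """Redact PII before the prompt leaves the governed boundary."""
    for pattern in PII_PATTERNS:
        prompt = re.sub(pattern, "[REDACTED]", prompt)
    return prompt

def post_check(output: str) -> str:
    """Block outputs that make accidental commitments."""
    lowered = output.lower()
    if any(phrase in lowered for phrase in COMMITMENT_PHRASES):
        raise ValueError("blocked: output contains a prohibited commitment")
    return output

def governed_generate(prompt: str, model_call) -> str:
    """Wrap any provider-specific callable with pre/post guardrails."""
    return post_check(model_call(pre_check(prompt)))
```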
3) Continuous quality monitoring (not occasional manual review)
In regulated finance, “it worked last month” is not a control. Providers update models, behavior drifts, retrieval changes, and prompts evolve.
A production-grade approach requires:
- golden datasets (curated question sets),
- automated regressions before changes,
- ongoing measurement of faithfulness/recall/hallucination rates,
- dashboards and alerts when quality degrades.
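A sketch of a regression gate over a golden dataset is shown below; the cases, the substring-matching rule, and the threshold are illustrative stand-ins for proper faithfulness scoring.

```python
# Curated question/expectation pairs; contents are illustrative.
GOLDEN_SET = [
    {"question": "What fees apply to an early loan repayment?",
     "must_contain": "early repayment"},
    # ... more reviewed cases
]

QUALITY_FLOOR = 0.95  # block the release and alert below this

def run_regression(generate) -> bool:
    """Run the golden set through any generate() callable before a change ships."""
    passed = sum(
        1 for case in GOLDEN_SET
        if case["must_contain"] in generate(case["question"]).lower()
    )
    score = passed / len(GOLDEN_SET)
    if score < QUALITY_FLOOR:
        print(f"regression FAILED: {score:.2%} below floor {QUALITY_FLOOR:.0%}")
        return False
    return True
```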
4) Model-agnostic exit readiness (real, technical, not a slide deck)
Exit strategy is where most architectures collapse under scrutiny.
If your application is tightly coupled to one provider’s model behaviors, embeddings, APIs, and safety tooling, your exit plan is mostly fiction.
A control plane enforces provider abstraction:
- routing across providers,
- portability of prompts and retrieval logic,
- feasible embedding migration paths,
- and fallbacks (including on-prem open-source models) for stressed scenarios.
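In code, the abstraction can be as small as one interface that business logic depends on, with provider adapters behind it. This is a sketch, not a full router; adapters for each vendor SDK are assumed.

```python
from typing import Protocol

class ModelProvider(Protocol):
    """The only interface business applications are allowed to depend on."""
    def generate(self, prompt: str) -> str: ...

class Router:
    """Routes across providers; the fallback order encodes the exit plan."""
    def __init__(self, primary: ModelProvider, fallbacks: list):
        self.chain = [primary, *fallbacks]

    def generate(self, prompt: str) -> str:
        last_error = None
        for provider in self.chain:
            try:
                return provider.generate(prompt)
            except Exception as exc:  # outage, revoked contract, stressed exit
                last_error = exc
        raise RuntimeError("all providers failed") from last_error
```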
A sovereignty-compatible GenAI pattern: “Sovereign knowledge, rented reasoning”
A safe and realistic architecture for regulated finance often looks like this:
- keep document stores and retrieval indexes inside an EU-sovereign boundary,
- use strict sanitization/minimization policies before anything leaves that boundary,
- send only the minimum necessary context to the model,
- and log every step through the control plane.
This pattern lets teams keep access to best-in-class GenAI while still reducing sovereignty and third-party risk.
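A sketch of the flow, assuming retrieve() and redact_pii() are services living inside the sovereign boundary and model_call() is the rented external model:

```python
MAX_CONTEXT_CHARS = 4000  # minimization budget, illustrative

def answer_with_rented_reasoning(question, retrieve, redact_pii, model_call):
    # 1. Retrieval runs against indexes inside the sovereign boundary.
    passages = retrieve(question, top_k=3)
    # 2. Sanitize and minimize before anything crosses the boundary.
    context = redact_pii("\n\n".join(passages))[:MAX_CONTEXT_CHARS]
    # 3. Only the minimum necessary context reaches the external model.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # 4. The control plane logs every step (see the audit sketch in Part IV).
    return model_call(prompt)
```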
Part V: Making it operational under DORA
1) Contracts matter, but evidence matters more
DORA requires strong contractual provisions (audit rights, subcontractor controls, termination rights, data location commitments, etc.).
But even perfect contract language doesn’t replace technical evidence:
- you still need logs,
- you still need reproducibility,
- you still need real control over access and keys,
- and you still need tested exit pathways.
2) Exit strategy must be exercised, not described
A credible “stressed exit” plan typically includes:
- a fallback model option (often on EU-sovereign or on-prem compute),
- documented switching procedures,
- and proof the business workflow continues with degraded-but-acceptable quality.
If the only AI option is locked into one provider, regulators will (rightly) ask how that satisfies resilience expectations.
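Exercising the exit can start as small as an automated drill: simulate a primary-provider outage and assert the workflow still completes on the fallback. The stubs below are illustrative; a real drill would run against staging infrastructure.

```python
def generate_with_fallback(prompt, providers):
    """Try each provider in order; the chain encodes the exit plan."""
    for provider in providers:
        try:
            return provider(prompt)
        except ConnectionError:
            continue
    raise RuntimeError("stressed exit failed: no provider available")

def test_stressed_exit():
    def failing_primary(prompt):
        raise ConnectionError("primary provider unavailable")

    def on_prem_fallback(prompt):
        return "degraded-but-acceptable answer"

    result = generate_with_fallback(
        "customer question", [failing_primary, on_prem_fallback]
    )
    assert result == "degraded-but-acceptable answer"
```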
3) Register of Information: keep it current through real dependency mapping
One of the most painful DORA realities is the “register gap”: services are used in practice but not properly recorded, especially with shadow AI.
A control plane approach helps because it can:
- map actual processing chains (providers, services, vector DBs, moderation layers),
- produce evidence of which services participated in which interactions,
- and support audit-ready reporting without scrambling.
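For example, if the control plane emits per-request audit records like the sketch in Part IV, a dependency snapshot for the register can be derived from them directly (the field names follow that sketch and are illustrative):

```python
def dependency_snapshot(audit_records):
    """Derive the actually-used dependency set from control-plane audit logs."""
    deps = set()
    for record in audit_records:
        deps.add(("model_provider", record.model))
        for source in record.retrieved_sources:
            deps.add(("knowledge_source", source))
        for guardrail in record.guardrails_applied:
            deps.add(("guardrail_service", guardrail))
    return sorted(deps)  # feed into the Register of Information review
```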
Conclusion: in the DORA era, trust is engineered, not promised
The era of “naive cloud” is over for regulated finance.
Under DORA and the extraterritorial reality of US access laws, using US hyperscalers for critical functions without additional sovereignty and control layers is not just risky. In some scenarios, it becomes very difficult to defend in audit and regulatory review.
The path forward isn’t abandoning GenAI. It’s engineering trust:
- sovereign cloud controls (including external key control where required),
- a control plane that delivers auditability, guardrails, and monitoring,
- and real exit readiness that works in practice, not just on paper.
In a DORA world, trust isn’t what a provider brochure claims.
Trust is what you can evidence, reproduce, and defend.