Sovereignty Level Assessment — how to choose the right level of AI sovereignty and not overpay
- Mar 18
- 7 min read
Series: CDF 1.3.2 in practice — 6 articles on the methodology of sovereign AI implementations
This is the third article in the series (S2). In previous issues: Cognitive SLA (S1) — AI reasoning quality metrics; From pilot to production (G1) — eliminating Pilot Purgatory. The SAVANT series focuses on compliance, quality measurement, and oversight. The GENESIS series focuses on agent governance, scaling, and operations. CDF 1.3.2 is a proprietary methodology developed by allclouds.pl, based on ISO/IEC 42001:2023 and the EU AI Act.

Sovereign AI is surrounded by oversimplifications today. Some organizations assume that because AI touches data, regulation, and security, the only reasonable response is to isolate the environment completely. Others go in the opposite direction and try to keep every AI initiative in the public cloud because it is faster and cheaper.
Both approaches can be flawed — and both can be costly. That's why CDF 1.3.2 introduces Sovereignty Level Assessment: a formal assessment of the level of sovereignty required for a specific business area, conducted before any architectural decisions are made.
It's not about whether AI should be sovereign
The real question is different: what level of sovereignty is actually needed for a given process, data, and regulatory context. No more, no less.
CDF 1.3.2 describes two common mistakes organizations make when choosing an AI architecture. The first is over-engineering — oversizing the level of sovereignty, which increases TCO, lengthens implementation, and reduces operational flexibility. The second is under-engineering — underestimating regulatory, operational, or geopolitical requirements, leading to the risk of non-compliance, incidents, or loss of control over a critical area.
This is an important distinction because the decision on sovereignty should not be intuitive or ideological. Not every AI initiative requires an air-gapped architecture, but not every initiative can safely remain in the public cloud. A structured analysis is needed, not a general slogan of "we want sovereign AI."
What is Sovereignty Level Assessment
Sovereignty Level Assessment, or SLA-S for short, is a mandatory part of Phase 0 in CDF 1.3.2. Its purpose is to determine what level of sovereignty for AI infrastructure, models, data, integration, and operations is actually required — rather than assuming a maximum level of isolation for the entire transformation program in advance.
In practice, SLA-S acts as a diagnostic tool. After it is performed, the organization receives: a sovereignty level classification for the given area; a recommended Sovereign Deployment Pattern; the regulatory, operational, and architectural justification; and input for the 3-5 year TCO Model and the Make-vs-Buy Decision Framework.
This is important because the decision on the deployment model no longer rests on a single opinion from the architect or the security department. It is the outcome of an agreed assessment, jointly approved by business, security, compliance, and architecture, and only then does it enter Phase 1 as a decision input.
The SLA-S result is one of the inputs to Phase 1, in which the 3-5 year TCO Model and Make-vs-Buy Decision Framework are designed. This means that architectural decisions are not made in isolation from costs — the two elements are closely linked.
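To make the shape of that hand-off concrete, here is a minimal sketch of what an SLA-S output record could look like in code. The `SlaSResult` type and its field names are illustrative assumptions, not part of CDF 1.3.2.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SlaSResult:
    """Hypothetical record of one Sovereignty Level Assessment outcome."""
    area: str                   # business area assessed, e.g. "HR onboarding workflows"
    sovereignty_level: int      # 1 = Hybrid, 2 = Fully On-Premise, 3 = Air-Gapped (see below)
    recommended_pattern: str    # the recommended Sovereign Deployment Pattern
    justification: str          # regulatory, operational, and architectural rationale
    tco_horizon_years: int = 5  # planning horizon fed into the Phase 1 TCO Model
```

A record like this would travel into Phase 1 together with the Data Gravity Assessment, rather than living only in one architect's head.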
Four dimensions of assessment
CDF 1.3.2 assesses the level of sovereignty in four dimensions. As a result, the analysis is not limited to the question of data location, but covers the full context of risk and dependencies.
| Dimension | What it assesses | Example levels |
| --- | --- | --- |
| Data Classification | Sensitivity and value of data processed by the AI system | Low: public data. Medium: personal data, trade secrets. High: classified information |
| Regulatory Exposure | Degree of exposure to sectoral and legal requirements | Low: no sectoral requirements. Medium: GDPR, NIS2/uKSC. High: DORA, defense sector, government requirements |
| Operational Criticality | Impact of AI system failure or unavailability on the organization | Low: supporting process. Medium: business process. High: mission-critical process |
| Supply Chain Dependency | Acceptability of using external services, APIs, models, and hosting | Low: public cloud acceptable. Medium: trusted EU cloud. High: full air-gap isolation |
The multidimensional approach matters a great deal in practice. An organization may process moderately sensitive data but operate in an area of very high operational criticality. It may also carry low operational risk but face very strong regulatory constraints. CDF therefore assumes that the final level is not based on the "average" of the dimensions: the highest level of risk or restriction in any of the critical dimensions determines the minimum acceptable level of sovereignty.
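A minimal sketch of that rule, assuming each dimension has already been scored 1 (low) to 3 (high) and that, for simplicity, the dimension scale maps one-to-one onto the three sovereignty levels described below; the scoring scale and function name are illustrative, not defined by CDF 1.3.2.

```python
def minimum_sovereignty_level(data_classification: int,
                              regulatory_exposure: int,
                              operational_criticality: int,
                              supply_chain_dependency: int) -> int:
    """Return the minimum acceptable sovereignty level (1-3) for one business area.

    The result is driven by the highest-risk dimension, not the average.
    """
    scores = (data_classification, regulatory_exposure,
              operational_criticality, supply_chain_dependency)
    if any(s not in (1, 2, 3) for s in scores):
        raise ValueError("each dimension must be scored 1 (low), 2 (medium), or 3 (high)")
    return max(scores)

# Moderately sensitive data (2) in a mission-critical process (3):
# the highest-scoring dimension, not the average, sets the floor.
assert minimum_sovereignty_level(2, 2, 3, 1) == 3
```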
The chosen level of sovereignty directly affects how AI quality is measured. For example, in the Air-Gapped Defense model, monitoring the Knowledge Freshness Index requires different procedures than in the hybrid model — the knowledge base cannot be updated online. We describe more about reasoning quality metrics in the article "Cognitive SLA — why 99.9% uptime is not enough."
The Minimum Sufficient Sovereignty Principle
The key idea behind SLA-S is Minimum Sufficient Sovereignty. It means selecting a level of isolation, location, and control over the AI system that is sufficient to meet regulatory, security, business continuity, and IP protection requirements — but no more than necessary.
This is a very mature architectural approach. On the one hand, it protects the organization from building overly heavy and costly infrastructure for processes that do not require full isolation. On the other hand, it protects against implementing an overly light model in critical or highly regulated areas.
In practice, this means moving away from the "one architecture for all" mindset. The methodology allows — and even assumes — that different processes within the same organization may require different levels of sovereignty.
Three levels of target architecture
The SLA-S result is mapped to three main levels of AI sovereignty and corresponding implementation patterns.
| Level | Name | Target pattern | Profile |
| --- | --- | --- | --- |
| Level 1 | Hybrid Sovereignty | Hybrid Sovereign-Cloud | Standard company data, supporting processes |
| Level 2 | Full Sovereignty | Fully On-Premise | Critical know-how, key entities |
| Level 3 | Isolated Sovereignty | Air-Gapped Defense | Defense, critical national infrastructure |
Level 1: Hybrid Sovereign-Cloud — base models or computational workloads can run in a trusted cloud environment, but critical data, organizational memory, the RAG layer, and access control remain under the organization's control. Used where a balance between flexibility, speed, and compliance is needed.
Level 2: Fully On-Premise — the entire core AI layer runs on the customer's on-premises infrastructure. Models, data, agent orchestration, logs, and integrations remain in a closed environment. Used where data sovereignty, operational control, and vendor risk mitigation are critical.
Level 3: Air-Gapped Defense — complete physical isolation of the environment, no internet connection, and no operational dependencies on external services. Updates and model transfers are carried out according to strict security procedures. Used in the defense sector and for particularly sensitive government systems.
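Encoded as configuration, the mapping from the table above could look like the sketch below; the dictionary name and structure are assumptions for illustration only.

```python
# Illustrative mapping of assessed sovereignty levels to Sovereign Deployment Patterns,
# mirroring the Level 1-3 table above.
DEPLOYMENT_PATTERNS = {
    1: {"name": "Hybrid Sovereignty",   "pattern": "Hybrid Sovereign-Cloud"},
    2: {"name": "Full Sovereignty",     "pattern": "Fully On-Premise"},
    3: {"name": "Isolated Sovereignty", "pattern": "Air-Gapped Defense"},
}

def recommended_pattern(level: int) -> str:
    """Return the target deployment pattern for an assessed sovereignty level (1-3)."""
    return DEPLOYMENT_PATTERNS[level]["pattern"]
```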
One organization, several levels of sovereignty
This is one of the most important conclusions from CDF 1.3.2: an organization does not have to choose a single level of sovereignty for its entire AI program.
The methodology provides specific examples. An HR process using personal data and standard internal workflows can operate in a Hybrid Sovereign-Cloud model. A process supporting operational decisions in energy, telecommunications, finance, or defense may require Fully On-Premise. An AI system for classified information, military environments, or isolated critical infrastructure may require Air-Gapped Defense.
This means that a mature AI strategy should segment use cases in terms of sovereignty rather than treating them as a single technology category. This is what most often allows regulatory compliance to be combined with reasonable TCO and a sensible pace of implementation.
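A toy illustration of that segmentation; the use cases and their levels simply restate the examples above and are assumptions, not a real assessment.

```python
from collections import defaultdict

# Hypothetical portfolio: several processes in one organization, each with its own SLA-S level.
assessments = [
    ("HR onboarding assistant", 1),           # personal data, standard internal workflows
    ("Grid operations decision support", 2),  # operational decisions in the energy sector
    ("Classified analysis workstation", 3),   # classified information, isolated infrastructure
]

by_level = defaultdict(list)
for use_case, level in assessments:
    by_level[level].append(use_case)

for level in sorted(by_level):
    print(f"Level {level}: {', '.join(by_level[level])}")
```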
SLA-S as an economic tool
Sovereignty Level Assessment is therefore not only a security tool, but also an economic and architectural tool. It helps to avoid two costly situations: overpaying for excessive isolation where a hybrid model would suffice, and implementing too light a model in an area that requires full localization or separation.
The SLA-S result goes directly into the 3-5 year TCO model designed in Phase 1 and into the Make-vs-Buy Decision Framework. This ensures that the organization does not make architectural decisions in isolation from costs, and does not estimate costs in isolation from sovereignty requirements.
Why this stage should happen before architecture
In many projects, the decision on the implementation model is made too early. First, the technology or hosting is selected, and only then does the team try to match regulatory requirements, security, and the operating model.
CDF reverses this order. First, a Data Gravity Assessment and Sovereignty Level Assessment are performed in Phase 0, and only then, in Phase 1, is the final choice of Sovereign Deployment Pattern made.
This is important because AI architecture today is not just an infrastructure decision. It is a decision about compliance, control, costs, supply chain, and operational resilience. And such decisions require data, not intuition.
Choosing the level of sovereignty is one of the inputs to Scale Path Definition — a mandatory element of CDF that defines the path from pilot to production. We discuss how CDF eliminates Pilot Purgatory and structures the transition to the production environment in the article "From pilot to production in 90 days."
What to ask before implementation
Before choosing an AI implementation model, it is worth asking a few specific questions:
- What data and decisions are really at the highest risk?
- Does every use case require the same level of isolation?
- Which restrictions result from regulations and which from internal security policies?
- Where does justified sovereignty end and costly over-engineering begin?
- How will the choice of architecture affect TCO, operational flexibility, and the risk of vendor lock-in?
- Does the organization have a formal mechanism for assessing the level of sovereignty, or is the decision based on habit?
If there are no structured answers to these questions, the organization is most likely not making an architectural decision—it is guessing. And when it comes to AI implementations in regulated sectors, guessing quickly becomes costly.
Sovereignty Level Assessment brings order to this decision-making moment. Instead of the "cloud or on-prem" debate, it provides a framework for choosing the level of sovereignty that is appropriate for a specific process, specific risk, and specific regulatory context.
Once the architecture is established, the next step is agent governance. In the next article, we describe how to manage a swarm of 50 AI agents without losing control — with Agent Registry, three interaction patterns, and emergency procedures.




