

Open Source or vendor lock-in: why the world's largest banks are choosing open AI stacks - and what this means for your organization

  • Mar 8
  • 5 min read

Imagine you are building a strategic AI system for your organization. You choose a leading cloud provider, integrate its models, and build processes around its API. The system works great. For two years, everything goes according to plan.


Then the provider raises its prices by 40%. Or a regulatory authority in your country decides that the data this system processes cannot leave the EU. Or the US Department of Justice issues a warrant for data held by this company, and under the US CLOUD Act the provider must comply, even though your servers are located in Warsaw.


These are not scenarios from science fiction movies. These are real risks. And that is why several of the world's largest banks have made a strategic decision that may seem surprising at first glance: they have adopted an "open-source first" strategy for their generative AI stacks.


Three regulations that are changing the economics of the cloud

Before we get into the technicalities, it is worth understanding why the topic of open source and AI sovereignty has become urgent right now. It is about the convergence of three regulatory vectors that together create a completely new legal reality for any organization processing data in the cloud.


US CLOUD Act (2018, still in force)

The Act requires US-registered companies to disclose data upon request by law enforcement agencies — even if the data is physically located abroad. This means that a European company using AWS, Azure, or Google Cloud does not have full legal protection for its data, even if the servers are located in Frankfurt or Warsaw.


EU AI Act (effective from 2025)

The regulation imposes auditability, transparency, and risk-management obligations on high-risk AI systems. An AI system locked inside a provider's black box becomes a legal risk: the organization must prove to the regulator that it understands how the model works and can intervene. Without access to the model's code and architecture, that is impossible.


China's Data Security Law and Cybersecurity Law

Although less obvious to European organizations, these laws matter indirectly: companies using AI solutions built on models trained by Chinese technology companies may be subject to requirements to transfer data to China.


Together, these three regulations create a situation where an organization can comply with the law in one country while breaking it in another — simply by choosing a cloud provider or AI model. This is not a problem that can be solved with a better lawyer. It is an architectural problem.


Why banks chose open source — and what you can learn from them

A trend that started in the financial sector is now spreading rapidly to other regulated industries: several of the world's largest banks have adopted open-source-first strategies for their generative AI stacks, covering agents, vector databases, API gateways, and observability layers.

There are three reasons, each equally important:


1. Auditability and regulatory compliance

A bank must be able to explain to the supervisory authority how each algorithm that influences credit, scoring, or transaction decisions works. A closed SaaS model does not allow this. An open-source model does. The regulator can see the architecture, training data, and evaluation mechanisms. This is not a luxury—it is a legal requirement in an increasing number of jurisdictions.


2. Portability and avoiding vendor lock-in

Open frameworks — LangChain, Haystack, Ray, Kubernetes — allow for multi-cloud deployment, where AI system components can be transferred between providers or migrated on-premise without rewriting the entire stack. This is a literal implementation of the portable architecture principle, which we recommend as one of six digital sovereignty strategies for CIOs.
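At the code level, portability comes down to one discipline: application logic depends on a narrow, provider-agnostic interface, never on a vendor SDK directly. A minimal sketch (all class and function names here are hypothetical stubs, not any real SDK):

```python
from typing import Protocol


class LLMBackend(Protocol):
    """Provider-agnostic interface. Application code depends
    only on this, so backends can be swapped freely."""

    def complete(self, prompt: str) -> str: ...


class LocalModelBackend:
    """Stub standing in for an on-premise open-source model."""

    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


class HostedAPIBackend:
    """Stub standing in for an external provider's API."""

    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"


def summarize(backend: LLMBackend, document: str) -> str:
    # Business logic is identical regardless of backend, so
    # migrating providers is a configuration change, not a rewrite.
    return backend.complete(f"Summarize: {document}")
```

Swapping `HostedAPIBackend` for `LocalModelBackend` requires no change to `summarize` — which is exactly what makes a multi-cloud or on-premise migration feasible without rebuilding the stack.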


3. Control over training data and fine-tuning

Banks have millions of documents, transactions, and customer interactions — data that is both their greatest asset and their greatest compliance risk. Open source allows for fine-tuning models locally, without sending data to external APIs. The data never leaves the controlled environment.


Open source as a layer of sovereignty — not ideology, but engineering

Important caveat: an open-source-first strategy does not mean rejecting all external providers. It means consciously choosing which layers of the stack must be open and controlled, and where it is possible to use external services.


Open-source strategies across the layers of the AI stack


Key principle: anything that touches sensitive data or influences high-risk decisions must be open, auditable, and locally controlled. The rest can benefit from global platforms, provided that the architecture remains portable.
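The principle above can be sketched as a routing rule: anything touching sensitive data stays on a locally controlled model, everything else may use an external platform. A toy illustration — the keyword check and layer names are hypothetical stand-ins for a real data classifier and deployment targets:

```python
# Illustrative markers only; a production system would use a
# proper data-classification service, not keyword matching.
SENSITIVE_MARKERS = {"iban", "pesel", "account", "customer"}


def is_sensitive(text: str) -> bool:
    """Crude stand-in for a real sensitivity classifier."""
    lowered = text.lower()
    return any(marker in lowered for marker in SENSITIVE_MARKERS)


def route(text: str) -> str:
    """Decide which stack layer may process this request."""
    if is_sensitive(text):
        # Auditable, locally controlled; data never leaves.
        return "local-open-source-model"
    # Non-sensitive work may use a global platform,
    # provided the architecture stays portable.
    return "external-platform"
```

The value of the sketch is the decision point itself: sensitivity is evaluated before any request leaves the controlled environment, not after.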

 

5 questions to ask your AI provider before signing a contract

Whether you are in the middle of a tender, evaluating existing contracts, or just building your AI strategy, these five questions will separate truly sovereign providers from those who use the word "sovereign" as a marketing label:

  1. Can you show me the full model architecture and technology stack—including dependencies on external services and APIs?

  2. Where does my data physically reside during training, fine-tuning, and inference—and under what legal jurisdiction?

  3. If the contract is terminated, can I export the model, data, and entire configuration without losing functionality?

  4. How does the system meet the auditability requirements of the EU AI Act — can I see decision logs, model weights, and risk documentation?

  5. What happens to my data if your company is acquired, goes bankrupt, or changes its terms of service?

If the provider cannot answer these questions precisely and in writing, you are at risk of vendor lock-in, regardless of how their marketing looks.
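Question 4 above hints at what an auditable decision trail can look like in practice. A minimal sketch of a structured decision log — the field names are illustrative assumptions, not fields mandated by the EU AI Act:

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One auditable entry: enough to reconstruct which model
    version produced which decision on which (hashed) input."""

    model_id: str    # exact model version used for the decision
    input_hash: str  # SHA-256 of the input, not the raw data
    decision: str
    timestamp: str


def log_decision(model_id: str, raw_input: str, decision: str) -> str:
    """Serialize one decision as a JSON line for an append-only log."""
    record = DecisionRecord(
        model_id=model_id,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        decision=decision,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))
```

Hashing the input lets the log prove which data a decision was based on without storing the sensitive data itself — a pattern only possible when you control the inference layer end to end.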


How GENESIS-AI and SAVANT-AI are built on open standards

At allclouds.pl, we decided from day one that the architecture of our products would be based on open standards — not because it's trendy, but because our customers are regulated organizations that need to be able to answer each of the above questions without nervously glancing at contract clauses.


SAVANT-AI — our sovereign cognitive system for enterprises — runs on-premise or in a trusted European cloud, uses open orchestration frameworks, supports local open source models (including Polish-language variants of Mistral and Llama), and provides full auditability of every decision in accordance with the requirements of the EU AI Act.


GENESIS-AI — an autonomous software generation platform — is built on open standards, which means that the code generated by the system belongs 100% to the customer, is not shared with external APIs, and can be audited, modified, and deployed without any licensing restrictions.


Both platforms meet NIS2 requirements for software supply chain security — because we know that for our customers in the financial, energy, and public sectors, this is not an option but a prerequisite.


Sovereignty through openness — not isolation

The paradox of sovereign AI is that true sovereignty does not come from closing yourself off to the world — it comes from having an architecture that gives you choice. Choice of supplier. Choice of jurisdiction. Choice of model. Choice of audit path.


Open source is a tool for that choice. It does not eliminate all dependencies — but it makes those dependencies conscious, controlled, and reversible.


In a world where the US CLOUD Act, EU AI Act, and escalating technological geopolitics are redefining what "my data" and "my AI system" mean, open standards-based architecture is no longer a philosophical choice. It is becoming both a competitive advantage and a compliance requirement.


Want to see what an open AI stack looks like in practice?

Join us for a technical session with a SAVANT-AI architect — we will show you a complete dependency map of our stack, documentation of compliance with the EU AI Act and NIS2, and answer each of the five questions above in relation to our system.

 
 
 
