A practical guide for business leaders navigating model choice, vendor lock-in, and regulatory exposure
There is a conversation happening in boardrooms across Europe right now, and it tends to follow the same pattern. A CIO describes how their organisation adopted an AI platform twelve months ago. It worked well. The use cases expanded. Then someone in legal asked a question that nobody had thought to ask at the start: where does the model actually process our data?
That question, in many cases, is the beginning of a longer and more uncomfortable one: did we actually choose this model, or did it just come with the platform?
This blog is shaped by those conversations. We hear them in procurement evaluations, in architecture reviews, in strategy discussions with executives who are starting to realise that the choice of a large language model is not a technical detail buried inside a platform. It is a decision with legal, operational, and strategic weight, and one that most organisations never consciously made.
What follows is not a technical deep dive. It is a practical guide for business leaders who want to understand what it means when a solution builder or platform can offer models from multiple providers, why that matters for compliance, and what questions they should be asking right now.
The question that used to be technical is now on the board agenda
A year ago, asking "which LLM are we using?" would have earned you a blank stare in most executive meetings. It was an engineering question. The CTO might know. The rest of the leadership team had no reason to care.
That changed quickly. It changed because regulators started asking. Because auditors started asking. Because clients, particularly those in regulated sectors, started including AI-specific clauses in their vendor assessments. A DPO we spoke with at a financial services firm put it bluntly:
"I was never consulted on the model selection. I found out which LLM we were running when I started the DPIA for a new use case. That should not happen."
The escalation from engineering question to board-level concern did not happen because executives suddenly developed an interest in transformer architectures. It happened because the regulatory and contractual environment caught up with the speed of AI adoption. In the Netherlands, this acceleration is especially visible. Jan Saan, co-founder and CTO of GLBNXT, addressed it directly in a recent interview with Het Financieele Dagblad, published in its AI & Data Sovereignty special edition. His argument was plain:
"European organisations are building their AI strategies on infrastructure that, by design, does not give them control over where their data goes or which model processes it. That is not a technical risk. It is a governance gap."
"We didn't choose this model, it came with the platform"
This is, by some distance, the most common pattern in the market right now. An organisation adopted a SaaS platform, a collaboration suite, a solution builder's product. The AI capability was bundled. The LLM underneath was a default, not a decision.
Nobody on the procurement side evaluated the model against the organisation's compliance requirements. Nobody asked under whose jurisdiction inference takes place. Nobody checked whether the model could be replaced if the regulatory picture shifted. The AI feature was part of the package, and the package was signed off.
This happens for understandable reasons. Speed matters. When a vendor offers an AI-enabled product that can go live in weeks, there is real business pressure to say yes. The LLM layer is invisible to most stakeholders, and vendors do not go out of their way to make it visible. You would be surprised how many enterprise contracts describe the AI component in two sentences, with no mention of the underlying model provider, the data processing location, or the customer's ability to change either.
We have reviewed vendor contracts where the entire AI capability was covered under a single line item called "intelligent features" or "AI-powered analytics." No model named. No hosting jurisdiction specified. No exit clause for the AI component separate from the broader platform agreement. For organisations that would never accept this level of opacity in a cloud hosting contract, the tolerance for it in the AI layer is remarkable.
The friction shows up later. It shows up when a DPO reopens a Data Protection Impact Assessment because the AI layer was never assessed at the model level. It shows up when procurement tries to add LLM-specific clauses to a renewal and discovers the vendor's architecture does not permit model substitution. It shows up when a legal team flags that the LLM provider is subject to the US CLOUD Act and the organisation's data protection obligations under GDPR Article 48 are in direct tension with that reality.
Compliance is not something you add afterward
There is a persistent idea in some parts of the market that compliance is a wrapper. That you pick the best-performing model first, then figure out the regulatory angle second. This gets the sequence wrong.
The compliance posture of your AI deployment is partly determined the moment you select a model and a hosting environment. If inference runs through US-based infrastructure operated by a provider subject to American law, your GDPR obligations do not pause while you sort out a Data Processing Agreement. If the model provider's terms allow them to use inputs and outputs for model improvement, your purpose limitation and data minimisation obligations are already compromised.
DPOs and legal teams across regulated sectors are pushing back on exactly this pattern. We hear it repeatedly: deployments that were approved by IT, that delivered real value, that are now stalling because the compliance review at the model layer raises issues that nobody addressed during the initial rollout. In the public sector, where the Dutch government's own policies on cloud and data sovereignty apply, the questions are even more pointed. In financial services, supervisory expectations around outsourcing and data governance make the conversation inescapable.
The EU AI Act makes this more concrete. Organisations deploying high-risk AI systems carry obligations around transparency, accountability, and documentation that extend through the value chain. If you cannot explain which model you are running, where it is hosted, and under what terms, you have a gap that will only become more difficult to close as enforcement timelines approach.
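One lightweight way to start closing that gap is to keep a structured record per AI system, capturing exactly the facts an auditor or DPO would ask for. The sketch below is our own illustration of what such a record might hold; the field names and values are invented suggestions, not an AI Act compliance template.

```python
# An illustrative (not authoritative) record of what "being able to explain
# which model you run, where, and under what terms" might capture per system.
# Field names and example values are hypothetical, not a compliance template.

from dataclasses import dataclass


@dataclass
class ModelRecord:
    system_name: str            # the business application using the model
    model_name: str             # the specific model and version deployed
    provider: str               # who operates the model
    hosting_jurisdiction: str   # where inference physically runs
    training_on_inputs: bool    # do the provider's terms allow reuse of your data?
    exit_clause: bool           # can the model be replaced without re-procurement?


record = ModelRecord(
    system_name="claims-triage-assistant",
    model_name="example-model-v2",
    provider="Example Provider B.V.",
    hosting_jurisdiction="EU (Netherlands)",
    training_on_inputs=False,
    exit_clause=True,
)
print(record)
```

If your organisation cannot fill in every field of a record like this for every AI system in production, that is the gap the paragraph above describes.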
The capability argument is real, but it is incomplete
Every executive we speak with raises the same point, and they should: "We chose this model because it performs best for what we need." Fair enough. Capability matters. If the model cannot do the job, everything else is academic.
But performance is contextual and it is temporary. The model that leads benchmarks today may not lead them in six months. Pricing structures change. New models appear that outperform on specific tasks while underperforming on others. The pace of change in this market is fast enough that what you evaluate in January may no longer be the strongest option by summer. We have seen organisations complete a three-month model evaluation process only to find that the winning candidate had been superseded by the time the contract was signed.
More to the point, the best-performing model in the world is irrelevant if you cannot deploy it in a way that satisfies your legal obligations. What the model can do and what deploying it means for your compliance posture need to be evaluated together, not in sequence. Neither question gets to override the other.
The organisations we see navigating this most effectively are the ones that refuse to separate these conversations. They bring the DPO into the model evaluation alongside the technical architect. They treat hosting jurisdiction and data processing terms as selection criteria with the same weight as benchmark scores and latency numbers.
What LLM-agnostic means in plain language
The term gets used a lot. It is worth being precise about what it means in practice, because there is a real difference between platforms that claim flexibility and platforms that actually deliver it.
When a platform or solution builder is LLM-agnostic, it means the customer, or the customer's architects, can select, swap, or combine models from multiple providers without rebuilding the application. The application logic, the user interface, the integrations, all of that stays intact. The model underneath can change.
Think of it the way you think about electricity. You do not want your factory's production line wired to a single energy provider with no ability to switch. You want a standard connection that lets you source power from whoever offers the best combination of price, reliability, and terms. The same principle applies here.
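For readers who want to see the shape of this, a minimal sketch follows: the application codes against one stable interface, and the provider behind it becomes interchangeable. The class and provider names below are hypothetical illustrations, not any specific vendor's API.

```python
# A minimal sketch of the LLM-agnostic idea. The application depends on one
# interface; the provider behind it can change without touching app logic.
# All names here are hypothetical illustrations, not a real vendor API.

from abc import ABC, abstractmethod


class ChatModel(ABC):
    """The contract the application codes against. Providers vary; this does not."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class ProviderA(ChatModel):
    def complete(self, prompt: str) -> str:
        # In a real system: call Provider A's API here.
        return f"[provider-a] response to: {prompt}"


class ProviderB(ChatModel):
    def complete(self, prompt: str) -> str:
        # In a real system: call Provider B's API here.
        return f"[provider-b] response to: {prompt}"


def answer_customer_query(model: ChatModel, query: str) -> str:
    # Application logic depends only on the ChatModel interface.
    # Swapping ProviderA for ProviderB requires no change here.
    return model.complete(query)


if __name__ == "__main__":
    model: ChatModel = ProviderA()  # a configuration choice, not a rebuild
    print(answer_customer_query(model, "Summarise this contract clause."))
```

The design choice that matters is where the provider decision lives: in configuration, chosen once per deployment, rather than woven through the application code.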
The current reality is different. Most AI platforms are wired to a single LLM provider. The integration is deep, the APIs are proprietary, and switching means a rebuild that nobody planned for and nobody budgeted. This is vendor lock-in, but at a layer that most procurement frameworks have not yet learned to assess.
For business leaders, the practical consequence is simple. If your platform is locked to a single model provider and that provider changes its pricing, its terms, its data handling practices, or its hosting locations, your options are to accept the change or start over. Neither puts you in a strong negotiating position.
The questions that are already appearing in tenders
Something is shifting in how regulated organisations evaluate AI vendors, and it is happening faster than most vendors expected. Procurement teams in financial services, the public sector, healthcare, and legal are adding LLM-specific evaluation criteria to their tenders. These are not theoretical questions. They are showing up in RFPs right now.
The questions we see recurring: can we change the underlying model without re-engineering the solution? Under whose jurisdiction does model inference take place? What happens to our data during and after processing? If our regulatory obligations change next year, how quickly can we adapt our AI infrastructure? Does the platform support models from multiple providers, or are we locked into one?
These questions do not come from paranoia. They reflect a market that is growing up. Organisations have learned, sometimes the hard way, that technology decisions made in haste create contractual and operational constraints that outlast the original business case. The cloud migration era taught this lesson expensively. The AI adoption cycle is teaching it again, and faster.
What is worth noting is who is asking these questions. It is not only the usual compliance gatekeepers. We see CTOs raising them because they want architectural flexibility. We see procurement leads raising them because they have been burned by vendor lock-in before and recognise the pattern. We see board members raising them because they have read enough about AI governance to know that the regulatory picture is not settled and they do not want to be caught on the wrong side of it. The fact that these conversations are converging from different directions tells you something about where the market is headed.
The organisations with the most options will move fastest
The organisations that will adapt best to whatever comes next, in AI regulation and in AI capability, are not the ones that picked the "best" model in 2024 or 2025. They are the ones that made sure they could change their mind.
Choice at the model layer is a structural requirement for any organisation operating in a regulatory environment that is still taking shape. The European regulatory framework for AI is not finished. Enforcement mechanisms under the EU AI Act are still materialising. GDPR interpretations around AI data processing continue to develop through case law and supervisory guidance. Building on an architecture that locks you into a single model, a single jurisdiction, and a single set of vendor terms is a bet that nothing will change. That is not a bet most compliance officers would sign off on.
The platforms and solution builders that understand this are already designing for it. They are building abstraction layers that let customers route different workloads to different models. They are making hosting jurisdiction a configurable parameter, not a fixed constraint. They are writing contracts that separate the AI layer from the broader platform agreement so that a model switch does not require a full re-procurement. This is where the market is going. The question is whether your current architecture is going there too.
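As an illustration of what "hosting jurisdiction as a configurable parameter" can look like in practice, here is a hedged sketch of a workload routing table. The workload names, model identifiers, regions, and policy fields are invented for the example; real platforms will differ.

```python
# A hedged sketch of workload-to-model routing with jurisdiction as a
# configurable parameter. Workload names, model identifiers, and regions
# are invented for illustration; real platforms will differ.

from dataclasses import dataclass


@dataclass(frozen=True)
class Route:
    model: str          # which model serves this workload
    region: str         # where inference is allowed to run
    pii_allowed: bool   # whether personal data may flow to this route


# The routing table is configuration, not code: changing a model or a
# region is an edit here, not a re-engineering project.
ROUTES: dict[str, Route] = {
    "contract-analysis": Route(model="model-x", region="eu-west", pii_allowed=True),
    "marketing-copy":    Route(model="model-y", region="eu-central", pii_allowed=False),
}


def select_route(workload: str, contains_pii: bool) -> Route:
    route = ROUTES[workload]
    if contains_pii and not route.pii_allowed:
        raise PermissionError(f"Workload '{workload}' is not cleared for personal data.")
    return route


if __name__ == "__main__":
    print(select_route("contract-analysis", contains_pii=True))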
The question for business leaders is not which LLM is best. The question is whether they have the freedom to choose at all, and whether that freedom will still be there twelve months from now.
GLBNXT builds sovereign AI infrastructure for regulated European enterprises, offering LLM-agnostic, GDPR-compliant platforms hosted entirely within the EU. Learn more at glbnxt.com
References
European Commission, AI Act: Regulatory Framework for Artificial Intelligence
EUR-Lex, Regulation (EU) 2024/1689 (AI Act), Official Journal
EDPB, Guidelines 02/2024 on Article 48 GDPR
Kennedys Law, The EU AI Act implementation timeline (March 2026)
