The Case for Sovereign AI in Legal Practice in 2026

How Dutch Law Firms Can Embrace Technology Without Compromising Client Confidentiality

The legal profession stands at a crossroads. Artificial intelligence, particularly generative AI, promises to change how lawyers work. We're talking about everything from legal research and document analysis to drafting and client communication. But this technological leap raises critical questions about professional obligations, client confidentiality, and the independence of legal practice itself.

At GLBNXT, we've spent the past year talking directly with Dutch legal firms. We wanted to understand their real needs, their concerns, and what they hope to achieve with AI. Through extensive interviews with legal professionals (from junior associates right up to senior partners), we discovered something interesting. There's genuine enthusiasm, but it's tempered by legitimate concerns about professional responsibility.

What Dutch Legal Professionals Are Really Saying

Several themes came up repeatedly in our research conversations with legal staff across the Netherlands. Partners told us they're excited about efficiency gains but deeply anxious about confidentiality breaches. Associates welcomed research assistance but worried about becoming over-reliant on unverified outputs. Compliance officers kept raising flags about data sovereignty and GDPR implications.

One senior partner at a mid-sized Amsterdam firm captured the mood perfectly:

"We see the potential for AI to transform our practice. Imagine reducing document review time by 60% or finding relevant case law in seconds instead of hours. But when we looked at the mainstream AI tools, we hit a wall. The moment we realized our client data could be used to train someone else's model, or that data might flow through servers in jurisdictions outside the EU, we had to stop. Our professional obligations aren't negotiable, no matter how impressive the technology."

This tension between technological promise and professional constraint isn't unique to the Netherlands. The Council of Bars and Law Societies of Europe (CCBE) and the Nederlandse Orde van Advocaten (NOvA) have recently published comprehensive guidance that validates exactly what we heard in our research. Their message mirrors what Dutch lawyers told us directly: AI is valuable, but only when you deploy it with careful attention to the core values of the profession.

The Research Themes: Five Critical Barriers to AI Adoption

Through our interviews, GLBNXT identified five recurring themes that legal firms considered essential before AI could deliver real value:

Data Sovereignty and Confidentiality

This was the big one. It came up in nearly every conversation. Dutch lawyers understood intellectually that mainstream AI tools might use their inputs for training, but the full implications only became clear during our discussions.

The CCBE's October 2025 guide on generative AI confirms these concerns are well-founded. The guide states it explicitly:

"Users interacting with GenAI tools may unknowingly contribute input data that retrains the model. Without clear disclosures from system operators, individuals could unintentionally expose confidential or sensitive information, unaware of the potential risks."

One compliance officer we interviewed told us about discovering that a popular AI tool they'd been evaluating was processing data through servers in multiple jurisdictions. The contractual terms granted the vendor broad rights over input data. "We were three weeks away from rolling it out firm-wide," she said. "The potential for professional liability was staggering."

Verification and Accuracy

Legal staff repeatedly talked about "hallucinations." That's when AI systems generate plausible but entirely fictional legal content. Several associates mentioned testing various tools and being genuinely shocked at how confidently AI would cite cases that don't exist.

The CCBE guide addresses this head-on. In legal contexts, GenAI output "might produce entirely fictional case law, create non-existent court cases or judicial opinions, falsely attribute quotes to judges or legal scholars or construct seemingly plausible but entirely invented legal arguments."

Our research found that lawyers wanted AI assistance, but they were deeply uncomfortable with tools that couldn't provide verifiable source references. One litigator put it bluntly: "I need to be able to trace every claim back to a primary source. If the AI can't show me exactly where something comes from, it's useless to me professionally. Actually worse than useless, because it creates liability risk."

Professional Competence and Responsibility

This question kept coming up: who's accountable when AI-assisted work contains errors? How do firms ensure quality control?

The NOvA's recommendations address this directly: "De advocaat blijft eindverantwoordelijk voor advies en rechtsbijstand; gebruik AI slechts als hulpmiddel." (The lawyer remains ultimately responsible for advice and legal assistance; use AI only as a tool.)

Several managing partners we spoke with described struggling with implementation frameworks. They wanted to empower their teams to use AI for efficiency, but they needed guardrails to ensure compliance with professional standards. One described their challenge: "How do we create a policy that says 'yes, use AI' while also ensuring every piece of AI-generated content gets meaningful human review? That's the governance puzzle we haven't solved."

Transparency and Client Trust

Our research revealed some interesting differences here. Views varied by generation and by practice area. Some lawyers felt clients should be explicitly informed whenever AI was used. Others argued that AI was simply another tool, no different from legal research databases.

The CCBE provides clear guidance on this:

"As with other technologies and tools, if it can reasonably be assumed that an informed client would object, make conditions, or otherwise have reservations in respect of use of GenAI for the purpose in question, the lawyer should make sure to be transparent with the client."

One partner specializing in privacy law told us: "I advise clients on data protection issues all day. How could I possibly use an AI tool that doesn't meet the same standards I'd recommend to them? The cognitive dissonance would be impossible."

Practical Integration and Workflow

Beyond the ethical and regulatory concerns, lawyers kept mentioning practical problems. They didn't want AI as a separate tool that required context-switching. They wanted it embedded naturally in their existing workflows.

Junior associates described wanting AI research assistance that felt like a natural extension of their legal research database. Partners wanted document review tools that integrated with their matter management systems. Nobody wanted to copy and paste between platforms. (That just risks data exposure anyway.)

The Regulatory Framework: Validating Practitioner Concerns

Here's what's remarkable about the CCBE and NOvA guidance. It aligns almost perfectly with what we heard in our research. These weren't theoretical concerns invented by regulators. They emerged from the lived experience of legal professionals grappling with AI adoption.

The Promise of AI in Legal Work

The potential benefits are real, and the legal professionals we interviewed could articulate them clearly. According to the CCBE's guide, lawyers can expect efficiency gains through automated document creation and rapid analysis of large document volumes, enhanced legal research with faster and more accurate case law identification, and better quality work through error reduction and standardized processes.

These improvements translate into tangible advantages: potential cost savings, faster case processing, better resource allocation within legal practices, and, crucially, more time for strategic advisory work rather than routine tasks. Several lawyers we interviewed estimated that AI could free up 20-30% of the time they currently spend on routine tasks, time that could go toward higher-value client counseling.

The NOvA echoes this optimism while grounding it in professional reality: "AI is een waardevol hulpmiddel voor de advocaat, mits ingezet met oog voor de kernwaarden advocatuur: onafhankelijkheid, partijdigheid, deskundigheid, integriteit en vertrouwelijkheid." (AI is a valuable tool for the lawyer, provided it is used with attention to the core values of the legal profession: independence, partiality, expertise, integrity, and confidentiality.)

The Critical Challenge: Where Standard AI Falls Short

This is where enthusiasm meets sobering reality, and where our research revealed the disconnect between available tools and professional requirements.

The CCBE guide identifies confidentiality as one of the most important professional obligations affected by AI use. The challenge is straightforward but profound: many generative AI tools are configured to use prompts, uploaded documents, images, or audio files for further training of their models.

The NOvA's recommendations are even more direct: "Gebruik geen vertrouwelijke gegevens in gratis tools" (Do not enter confidential data into free tools) and "Wees je bewust dat hoe minder je betaalt voor een tool hoe meer data er waarschijnlijk gebruikt wordt." (Be aware that the less you pay for a tool, the more data is likely to be used.)

This creates the fundamental tension our research uncovered repeatedly. The very data that would make AI most useful to lawyers (client information, case specifics, strategic considerations) is exactly what cannot be entrusted to typical commercial AI platforms without violating professional obligations.

The Hidden Risks: Technical Challenges with Professional Consequences

Beyond confidentiality, both the regulatory guidance and our research identified several critical risks.

Hallucinations and Fabricated Content: The CCBE explains that AI systems can "generate factually inaccurate or illogical answers." Several lawyers we interviewed had personal stories of discovering fabricated case citations. One litigator recalled: "I asked an AI tool for cases supporting a particular argument. It gave me five citations. I could only find three in the databases. When I looked closer, two were completely invented. Similar names to real cases, but the citations didn't exist."

Lack of Transparency: Virtually all generative AI systems exhibit what the CCBE calls the "black box phenomenon." Their internal reasoning processes are opaque and difficult to interpret. This was a particular frustration point in our research. Lawyers are trained to show their work, to trace reasoning from premises to conclusions. AI systems that can't explain how they reached an output violate their professional instincts.

Complex Data Flows: The NOvA emphasizes the need to "Ken de datastromen" (Know the data flows). That means understanding where data are stored and processed, which services process personal data, and ensuring that input and output remain within the firm's environment. Our research found most lawyers had little visibility into these technical details. Yet they bore full professional responsibility for data protection.

Competence and Training: Both organizations stress that professional competence extends beyond legal knowledge to technical understanding. As the CCBE states, lawyers must "understand the capabilities and limitations of all technological solutions they use for their work, including GenAI." This created real anxiety among the lawyers we interviewed. How much AI literacy is enough? Where do they find reliable training?
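
To make the verification gap concrete, here is a minimal sketch of the kind of citation sanity check interviewees described doing by hand: treat every AI-supplied citation as unverified until it is found in a trusted source. The `known_citations` set stands in for a real case-law lookup (for example, a rechtspraak.nl search), and the ECLI identifiers below are made-up placeholders, not real cases.

```python
# Minimal sketch of a citation sanity check: AI-cited cases are split into
# those found in a trusted index and those that need manual research.
# All identifiers here are fabricated placeholders for illustration only.

def check_citations(ai_citations: list[str], known_citations: set[str]) -> dict[str, list[str]]:
    """Partition AI-provided citations into verified and unverified."""
    verified = [c for c in ai_citations if c in known_citations]
    unverified = [c for c in ai_citations if c not in known_citations]
    return {"verified": verified, "unverified": unverified}

# Hypothetical trusted index (in practice: a query against a case-law database).
trusted_index = {
    "ECLI:NL:HR:2019:1234",
    "ECLI:NL:HR:2021:0042",
    "ECLI:NL:GHAMS:2020:0815",
}

# Five AI-supplied citations, two of which cannot be traced to a primary
# source, mirroring the litigator's experience quoted above.
result = check_citations(
    [
        "ECLI:NL:HR:2019:1234",
        "ECLI:NL:HR:2021:0042",
        "ECLI:NL:GHAMS:2020:0815",
        "ECLI:NL:HR:2018:9999",
        "ECLI:NL:RBAMS:2022:7777",
    ],
    trusted_index,
)
print(result["unverified"])  # anything listed here requires manual research
```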

GLBNXT's Response: Building Sovereignty into the Solution

Our research findings drove our product development directly. We realized something important: for AI to deliver value in legal practice, it couldn't just be bolted onto existing commercial AI platforms. It needed to be architected from the ground up with professional obligations as first-order requirements, not afterthoughts.

This is why GLBNXT developed a 100% sovereign AI solution with these core principles:

Zero Training on Client Data

Every prompt, every document, every piece of information processed through our system remains yours. We don't use client data to train or improve our models. This isn't just a policy choice. It's technically enforced in our architecture.

This addresses what the CCBE identifies as the primary confidentiality risk: "data that is entered into the user interface may be stored and re-used by the provider for purposes such as training or refining and improving the AI model."

One partner who participated in our research and now uses our platform described the difference:

"Finally, I can actually use AI for real work. I can upload a confidential memorandum, ask questions about specific clauses, draft responses. All without wondering whether I've just compromised client confidentiality."

EU-Hosted Infrastructure Under Your Control

All data processing occurs either on GLBNXT infrastructure or verified EU-hosted servers. You know exactly where your data resides, who has access, and how it's protected. This addresses the NOvA's recommendation to "Zorg dat input en output binnen de kantooromgeving blijven." (Ensure that input and output remain within the office environment.)

For Dutch firms, this has practical implications beyond compliance. One IT manager we work with described it: "We can finally give our lawyers a clear answer when they ask 'Is it safe to use AI for this?' Instead of complex conditionals and risk assessments, we can say 'yes, with our approved platform.'"

Verifiable Sources and Transparency

Our system doesn't just provide answers. It provides citations you can verify. Every legal claim links back to primary sources. This directly addresses the hallucination problem that emerged in both our research and the CCBE guidance.

The CCBE emphasizes this: "Lawyers should verify the output of a GenAI before it is utilised in their work (where the use case requires), understand the capabilities and limitations of all technological solutions they use for their work, including GenAI."

Our design philosophy makes verification natural rather than burdensome. One associate told us: "With other tools, checking sources felt like doing the research twice. With GLBNXT, I can see immediately where information comes from and assess source quality as I work."
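
To illustrate the design principle (this is a generic pattern for source-grounded output, not GLBNXT's internal data model; the field names are assumptions), an answer that supports verification can carry its sources as structured data rather than as free text:

```python
# Illustrative sketch of an answer object that carries verifiable sources.
# A generic pattern for source-grounded output; not GLBNXT's actual schema.
from dataclasses import dataclass, field

@dataclass
class SourceReference:
    citation: str  # e.g. an ECLI identifier or a statute article
    url: str       # link to the primary source for one-click verification
    excerpt: str   # the passage the claim rests on

@dataclass
class GroundedAnswer:
    text: str
    sources: list[SourceReference] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # An answer without sources should be flagged, never silently shown.
        return len(self.sources) > 0
```

The point of such a structure is that the interface can refuse to present unsourced claims as authoritative, making verification the default path rather than extra work.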

Integration with Legal Workflows

Rather than requiring lawyers to switch between platforms, our solution integrates with existing legal research databases, document management systems, and matter management tools. This addresses the practical integration concerns we heard repeatedly in our research.

The Professional Obligations Framework in Practice

The CCBE's Charter of Core Principles and the NOvA's recommendations converge on several obligations that informed our platform design:

Verification is Non-Negotiable

The NOvA recommends: "Verifieer altijd de output." (Always verify the output.) The CCBE emphasizes checking citations, case law, and facts manually before use. The guide also stresses using only tools with source references so output is verifiable.

Our platform is built around this principle. Every response includes source links. We make verification easy, not an afterthought.

Lawyer Remains Responsible

As the NOvA states: "De advocaat blijft altijd zelf verantwoordelijk voor het uiteindelijke advies en de bescherming van cliëntbelangen." (The lawyer always remains personally responsible for the final advice and protection of client interests.)

This shaped our approach to AI assistance. We provide tools that enhance human judgment, not replace it. Our interface design constantly reinforces human responsibility. AI suggests, lawyers decide.

Privacy by Design

The NOvA recommends: "Pas privacy-by-design toe." (Apply privacy by design.) This includes documenting considerations for AI tool use, never entering confidential or client data into public AI models, and conducting Data Protection Impact Assessments when processing personal data.

With GLBNXT, privacy by design isn't aspirational. It's architectural. The system can't expose data it never collects. Processing stays local or within controlled EU infrastructure. We provide documentation tools to help firms satisfy their DPIA obligations.
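
As one illustration of what architecturally enforced privacy can mean (a sketch under assumptions, not a description of GLBNXT's implementation; the hostnames are hypothetical), an outbound guard can refuse to send anything to a processing endpoint that is not on an approved EU allowlist:

```python
# Sketch of an outbound data-flow guard: requests may only target
# pre-approved EU processing endpoints. Hostnames are hypothetical.
from urllib.parse import urlparse

APPROVED_EU_ENDPOINTS = {
    "llm.internal.example-firm.nl",    # on-premises model endpoint
    "eu-inference.example-vendor.eu",  # contractually verified EU host
}

def assert_approved_endpoint(url: str) -> None:
    """Raise before any data leaves the firm for an unapproved host."""
    host = urlparse(url).hostname
    if host not in APPROVED_EU_ENDPOINTS:
        raise PermissionError(f"Blocked: {host!r} is not an approved EU endpoint")

assert_approved_endpoint("https://llm.internal.example-firm.nl/v1/chat")  # passes
# assert_approved_endpoint("https://api.us-provider.example.com/v1")      # would raise
```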

Transparency with Clients

Both organizations stress informing clients about AI use. The CCBE notes that "if it can reasonably be assumed that an informed client would object, make conditions, or otherwise have reservations in respect of use of GenAI for the purpose in question, the lawyer should make sure to be transparent with the client."

We provide firms with templates and tools for client communication about AI use. One partner described their approach: "We now include AI use in our engagement letters. Not as a warning, but as a value proposition. We can honestly tell clients their data remains confidential while we leverage technology to serve them more efficiently."

The Vendor Selection Framework: Questions to Ask

The CCBE provides clear guidance on selecting AI vendors: lawyers should "analyze terms and conditions of the AI provider to understand how data entered into the tool is used." The guide also recommends verifying contractual conditions covering data ownership, IP rights, liability, exit clauses, and vendor lock-in.

Based on our research and the regulatory guidance, here are the questions Dutch law firms should ask any AI vendor:

  1. Data Storage and Processing: Where exactly is our data processed and stored? Can you provide specific data center locations?

  2. Training and Model Development: Will our prompts and documents be used to train the AI model, now or in the future?

  3. Access Controls: Who has access to our data? Your employees? Subcontractors? Other customers?

  4. Data Portability: What happens to our data if we end the relationship? Can we export everything? What's the deletion process?

  5. Regulatory Compliance: How is compliance with Dutch law, GDPR, and EU AI Act ensured? Can you provide audit reports?

  6. Security Measures: What specific technical and organizational measures protect our data?

  7. Contractual Terms: Who owns input data? Who owns output data? What are the liability terms? What are the exit clauses?
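
The questions above lend themselves to a structured evaluation. As a hedged sketch (the question keys and the sample answers are illustrative, not a GLBNXT tool), a firm could track vendor due diligence like this:

```python
# Sketch of the vendor due-diligence questions above as a reusable scorecard.
# Keys and the sample evaluation are illustrative.
VENDOR_QUESTIONS = [
    "data_storage_and_processing",  # documented EU data center locations?
    "training_on_client_data",      # contractual + technical 'no training' guarantee?
    "access_controls",              # access limited to authorized firm users?
    "data_portability",             # full export and documented deletion on exit?
    "regulatory_compliance",        # GDPR / EU AI Act audit reports available?
    "security_measures",            # specific technical and organizational measures?
    "contractual_terms",            # data ownership, liability, exit, no lock-in?
]

def score_vendor(answers: dict[str, bool]) -> tuple[int, list[str]]:
    """Return the number of satisfied questions and the open items."""
    open_items = [q for q in VENDOR_QUESTIONS if not answers.get(q, False)]
    return len(VENDOR_QUESTIONS) - len(open_items), open_items

# Hypothetical vendor: strong overall, but cannot yet document its deletion process.
score, follow_up = score_vendor(
    {q: True for q in VENDOR_QUESTIONS} | {"data_portability": False}
)
print(f"{score}/{len(VENDOR_QUESTIONS)} satisfied; follow up on: {follow_up}")
```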

For GLBNXT, we designed our offering so these answers are straightforward. Data stays in the Netherlands/EU with explicitly documented locations. Zero training on client data (architecturally enforced). Access strictly limited to your authorized users. Complete data portability with documented deletion procedures. Full GDPR compliance with regular third-party audits. Bank-grade encryption, isolated processing environments. Clear contracts: you own your data, full IP rights to outputs, transparent liability terms, no lock-in.

Looking Ahead: The Future of AI in Dutch Legal Practice

The CCBE's guide concludes with several considerations for the future that are particularly relevant for the Dutch market.

Self-Regulation and Independence: With AI tools largely developed by a small number of technology companies with considerable market power, there's a real question: could the independence of the profession as a whole be affected? This concern resonated deeply with the senior partners we interviewed. One described it this way:

"If every firm in the Netherlands uses the same AI platform from the same Silicon Valley company, are we really independent? What if that platform's training data or algorithmic approach biases us toward certain legal theories or strategies? We need European alternatives built with our professional values."

Education and Professional Development: As the NOvA emphasizes: "Basiskennis van (generatieve) AI is essentieel voor iedere advocaat." (Basic knowledge of (generative) AI is essential for every lawyer.)

Our research found lawyers eager to learn but frustrated by technical jargon and training materials aimed at computer scientists. GLBNXT has responded with legal-specific AI literacy programs. We teach prompt engineering through legal research examples. We explain hallucinations through case law scenarios. We demonstrate bias mitigation in client counseling contexts.

Skills Development: The CCBE raises concerns about skills erosion: "If work traditionally given to junior lawyers for training is automated, the profession must ensure that training is reinforced in areas where skills might otherwise be eroded."

Several managing partners we interviewed expressed this exact concern. One said: "I learned legal writing by drafting dozens of research memos as a junior. If AI drafts those memos now, how do young lawyers develop that skill?"

This is why our platform includes learning modes: settings that provide guidance and suggestions rather than complete outputs, helping junior lawyers develop skills while still benefiting from AI assistance.

Real-World Impact: From Research to Results

The firms that have implemented GLBNXT's sovereign AI solution report outcomes that align with both the promise identified in our research and the guidance from CCBE and NOvA.

A mid-sized commercial firm reports 40% time savings on contract review while maintaining full partner oversight and verification. A boutique litigation practice cut legal research time in half while improving citation accuracy. A corporate legal department serving a Dutch multinational finally feels confident using AI for cross-border transactions, knowing data sovereignty is maintained.

But perhaps more important than efficiency metrics is the professional confidence these firms describe. As one general counsel told us: "I can finally stop worrying about whether AI use compromises our professional obligations. The technology and the ethics are finally aligned."

Conclusion: Professional Values and Technological Progress

The message from both European and Dutch legal regulators is clear. AI offers tremendous potential for the legal profession, but only when implemented with careful attention to professional obligations and core values. Our research with Dutch legal firms revealed that practitioners understood this instinctively. They saw AI's potential, but they also saw the risks. They were waiting. Not for permission, but for solutions that respected their professional responsibilities.

The CCBE and NOvA guidance validates what we heard in our research and provides a framework for responsible AI adoption. The path forward requires several things. Investment in knowledge about both opportunities and limitations of AI. Careful vendor selection that prioritizes solutions respecting data sovereignty and confidentiality. Clear policies establishing firm-wide AI guidelines that align with NOvA and CCBE recommendations. Ongoing verification, never treating AI output as final without human review. And client transparency, keeping clients informed about AI use in their matters.

For Dutch law firms, the question isn't whether to use AI. That future is already here. The question is whether to use AI solutions that align with professional obligations or compromise them.

At GLBNXT, our research-driven approach led us to a clear conclusion. Lawyers need AI that's built for their profession from the ground up. Not consumer AI tools with legal applications grafted on, but sovereign solutions where professional obligations are first-order requirements. In a profession built on trust, confidentiality, and independence, data sovereignty isn't a technical feature. It's a professional imperative. Solutions that prioritize EU hosting, zero training on client data, and complete transparency aren't just technically superior. They're professionally essential. They allow lawyers to embrace the efficiency and capabilities of AI while maintaining the core values that define the legal profession. Because ultimately, technological progress and professional values shouldn't be in tension. They should advance together.

References

CCBE, Guide on the Use of Generative AI by Lawyers, October 2025.

NOvA, Aanbevelingen AI in de advocatuur (Recommendations on AI in Legal Practice), 2025.

The recommendations discussed in this article are drawn from the CCBE Guide on the Use of Generative AI by Lawyers (October 2025) and the NOvA Recommendations on AI in Legal Practice (2025). GLBNXT's research involved interviews with legal professionals at Dutch law firms between 2024 and 2025. To learn more about sovereign AI solutions for legal practice or to schedule a demonstration, visit www.glbnxt.com.

© 2026 GLBNXT B.V. All rights reserved. Unauthorized use or duplication is prohibited.

This website and its contents are the exclusive property of GLBNXT. No part of this site, including text, images, or software, may be copied, reproduced, or distributed without prior written consent from GLBNXT B.V., located at Druivenstraat 5-7, 4816 KB Breda, The Netherlands, registered with the Dutch Chamber of Commerce (KvK) under number 95536779. VAT identification number (VAT ID): NL867171716B01. All rights reserved.
