Compliance
We align with international security and AI governance standards to ensure our platform meets the requirements of regulated industries and enterprise customers.
Security & Privacy Standards
Our internal controls are designed following ISO 27001 principles for information security management and NIS2 requirements for network and information security. We are pursuing formal ISO 27001 certification and NIS2 compliance, with completion targeted for Q4 2025.
Building on this foundation, we plan to achieve certification for ISO 27017 (cloud services security), ISO 27018 (protection of personally identifiable information in public clouds), ISO 42001 (AI management systems), and NEN 7510 (information security in healthcare) by the end of Q2 2026.
Our compliance and assurance reporting follows the ISAE 3402 and SOC 2 Type II standards, providing independent verification of our control environment. We are pursuing both attestations, with completion targeted for Q4 2026, so that customers have access to comprehensive independent audit reports on the effectiveness of our controls.
European Regulatory Compliance
As a European AI platform company, we design our operations to comply with EU regulatory requirements from the ground up. Our infrastructure and data handling practices ensure full compliance with the General Data Protection Regulation (GDPR), as detailed in our Data Privacy section. We align our security controls with the Network and Information Security Directive (NIS2) requirements for essential and important entities, ensuring resilience and incident response capabilities that meet EU cybersecurity standards.
We actively monitor and prepare for the EU AI Act requirements, maintaining readiness for obligations as they come into effect. Under the AI Act's classification framework, GLBNXT acts as a provider of general-purpose AI (GPAI) infrastructure. The final risk classification of AI systems built on our platform depends on specific customer use cases and deployment contexts. We provide customers with documentation and tools to support their own AI Act compliance obligations when deploying AI systems through our platform.
AI Governance Framework
Our AI governance model is built on six foundational pillars derived from our proprietary AI Maturity Framework. These pillars guide how we design, deploy, and monitor AI capabilities across our platform.
We begin with systematic risk identification, assessing potential harms and unintended consequences of AI systems before deployment. Our data quality and bias control processes ensure training data is representative, regularly validated, and monitored for potential biases that could lead to unfair or discriminatory outcomes.
Human oversight remains central to our approach. We design systems that keep humans in control of critical decisions, with clear escalation paths and the ability to override automated recommendations. Our commitment to explainability and transparency means we provide clear information about how AI models make decisions, what data they use, and the limitations of their capabilities.
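For illustration only, the sketch below shows one way a human-override gate for automated recommendations can be modeled; it is not our production implementation, and names such as Recommendation, ReviewDecision, and apply_oversight are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ReviewDecision(Enum):
    APPROVED = "approved"        # human accepts the AI recommendation
    OVERRIDDEN = "overridden"    # human substitutes their own decision
    ESCALATED = "escalated"      # routed to a senior reviewer

@dataclass
class Recommendation:
    model_id: str
    action: str
    confidence: float            # model's self-reported confidence (0-1)

@dataclass
class HumanReview:
    reviewer_id: str
    decision: ReviewDecision
    rationale: str
    final_action: Optional[str] = None  # set when the reviewer overrides

def apply_oversight(rec: Recommendation, review: HumanReview) -> str:
    """Return the action that is actually executed.

    For critical decisions the automated recommendation is never
    executed directly; the human review always has the last word.
    """
    if review.decision is ReviewDecision.OVERRIDDEN and review.final_action:
        return review.final_action
    if review.decision is ReviewDecision.ESCALATED:
        raise RuntimeError("Escalated decisions require a second review")
    return rec.action
```

In this kind of design, the override and escalation paths are explicit in the type system, which makes it straightforward to verify that no critical action bypasses human review.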
Accountability and auditability are embedded in our platform architecture. We maintain comprehensive logs of AI system behavior, decisions, and interventions, enabling both internal review and customer audits. This supports our final pillar of continuous improvement, where we systematically evaluate AI system performance, learn from incidents, and refine our approaches based on real-world outcomes.
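As a minimal sketch of what such an audit trail can look like, the example below builds an append-only log entry for an AI system event; the field names (event_id, occurred_at, event_type) and example values are illustrative assumptions, not our actual schema.

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(model_id: str, event_type: str, detail: dict) -> dict:
    """Build an append-only audit entry for an AI system event.

    Field names are illustrative; a real schema would be driven by
    the customer's audit and retention requirements.
    """
    return {
        "event_id": str(uuid.uuid4()),
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "event_type": event_type,  # e.g. "decision", "override", "intervention"
        "detail": detail,
    }

# Example: record a human override of a model recommendation.
entry = audit_record(
    model_id="risk-scoring-v3",
    event_type="override",
    detail={"reviewer": "analyst-42", "reason": "insufficient evidence"},
)
print(json.dumps(entry, indent=2))
```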
Audit Rights & Assurance Reporting
Enterprise customers have the right to request evidence of our security and compliance controls. We provide our Security & Governance Overview document, which summarizes organizational responsibilities, technical and administrative controls, data handling processes, and our certification roadmap. Upon completion of our SOC 2 audit, we will make Type II reports available to customers under NDA.
Customers with specific audit requirements can request additional documentation or, in the case of enterprise agreements, negotiate audit rights that allow independent assessment of our controls relevant to their use of the platform.
Subcontractor Compliance
We hold our subcontractors and sub-processors to the same high standards we apply to our own operations. Before engaging any third party that will handle customer data or support critical platform functions, we conduct due diligence to verify their security certifications, compliance posture, and adherence to relevant regulatory requirements. Our subcontractor agreements include specific security and compliance obligations, with ongoing monitoring to ensure continued adherence.
A complete list of subcontractors and their compliance credentials is maintained on our platform and available to customers.
Regulatory Monitoring & Adaptation
The regulatory landscape for AI and data protection continues to evolve rapidly. We actively monitor regulatory developments at EU and member state levels and maintain relationships with legal advisors specializing in technology regulation. This ensures we can anticipate new requirements and adapt our platform and practices proactively rather than reactively.
We communicate significant regulatory changes that may impact our customers through our customer portal and direct outreach, providing guidance on any actions customers may need to take to maintain their own compliance.