Responsible AI

AI That Complies Before It Ships

The EU AI Act, GDPR, and Dutch BIO guidelines are not optional extras — they are the baseline for every AI system we build. We design compliance in from the first line of architecture, so your legal and procurement reviewers say yes.

🇪🇺
EU AI Act Compliant
Risk classification & documentation
🔒
GDPR Article 22 Ready
Automated decision safeguards
🏛️
Dutch BIO Aligned
Government baseline security
☁️
Private Azure Tenants
Your data never trains public models

The EU AI Act is not future compliance — it is current law

As of August 2024, the EU AI Act is in force. Prohibited AI practices have been banned since February 2025. High-risk AI system obligations apply from August 2026. Organisations that are already deploying AI — in HR, credit decisions, public services, or critical infrastructure — need to act now.

Non-compliance carries fines of up to €35 million or 7% of global annual turnover. More practically, unapproved AI systems will not pass procurement review for government contracts or enterprise supplier assessments in regulated sectors.

We build compliant AI systems from day one — not retrofitted systems with compliance bolted on as an afterthought.

Unacceptable Risk
Prohibited under EU AI Act

Prohibited outright — biometric mass surveillance, social scoring, manipulative systems targeting vulnerable groups.

High Risk
Strict compliance obligations

Requires conformity assessment, human oversight, and detailed documentation. Applies to HR decisions, credit scoring, law enforcement, and critical infrastructure.

Limited Risk
Disclosure requirements

Transparency obligations — users must know they are interacting with an AI system. Applies to chatbots and AI-generated content such as deepfakes.

Minimal Risk
No mandatory requirements

AI-enabled spam filters, recommendation engines, and most productivity tools. Freely permitted with no mandatory compliance steps.
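One of the audit deliverables is a risk classification per AI system. As a purely illustrative sketch of how the four tiers above might be captured in a risk register, here is a minimal example — the system names, tier labels, and obligation summaries are simplified assumptions, not text from the Act:

```python
from dataclasses import dataclass

# Illustrative AI Act risk register entry. Tier names follow the four
# categories described above; the example systems and the one-line
# obligation summaries are simplified assumptions for illustration.

OBLIGATIONS = {
    "unacceptable": "Prohibited outright",
    "high": "Conformity assessment, human oversight, documentation",
    "limited": "Transparency / disclosure to users",
    "minimal": "No mandatory requirements",
}

@dataclass
class AISystemRecord:
    name: str
    tier: str  # one of OBLIGATIONS' keys

    @property
    def obligation(self) -> str:
        return OBLIGATIONS[self.tier]

register = [
    AISystemRecord("CV screening assistant", "high"),
    AISystemRecord("Customer-facing chatbot", "limited"),
    AISystemRecord("Spam filter", "minimal"),
]

for record in register:
    print(f"{record.name}: {record.tier} -> {record.obligation}")
```

A real register would also carry the legal basis for each classification and a reference to the conformity documentation — this sketch only shows the shape of the mapping.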

Privacy-by-Design: built into the architecture

Privacy-by-Design is not a checklist — it is an architectural approach. The six principles below are implemented at the code and infrastructure level, not as policy documents.

Data minimisation

AI systems only request the minimum data required for the task. No auxiliary data collection for model improvement.

Purpose limitation

Data is processed exclusively for the defined, documented purpose. Scope creep is prevented by architecture, not just policy.

Private model deployment

Azure OpenAI private tenants mean your data never touches the public model endpoint. Your prompts and documents are not logged centrally.

Human oversight by design

High-risk decisions include mandatory human review steps. The AI assists — it does not replace — accountable human judgement.

Audit trails and explainability

Every AI-assisted decision is logged with its inputs, model version, and reasoning trace. Your DPO and auditors can reconstruct any decision.

Right to explanation

Where GDPR Article 22 applies, individuals can request a meaningful explanation of automated decisions affecting them.
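The audit-trail principle above — every AI-assisted decision logged with its inputs, model version, and reasoning trace — can be sketched in a few lines. This is a hypothetical, minimal example; the field names are assumptions, not a prescribed schema, and a production system would write entries to append-only storage:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(inputs: dict, model_version: str,
                 output: str, reasoning: str) -> dict:
    """Build an audit-log entry for one AI-assisted decision.

    Records inputs, model version, output, and reasoning trace, plus a
    content hash so tampering is detectable. Field names are
    illustrative assumptions, not a prescribed schema.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "reasoning_trace": reasoning,
    }
    # Hashing the canonical JSON form lets an auditor verify integrity.
    canonical = json.dumps(entry, sort_keys=True)
    entry["sha256"] = hashlib.sha256(canonical.encode()).hexdigest()
    return entry

entry = log_decision(
    inputs={"applicant_id": "A-123", "features": {"income": 52000}},
    model_version="gpt-4o-2024-08-06",
    output="refer_to_human_review",
    reasoning="Income below threshold; flagged for manual check.",
)
print(json.dumps(entry, indent=2))
```

With entries like this, a DPO or auditor can reconstruct any decision and verify that the record has not been altered since it was written.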

Public vs Private deployment
Public OpenAI API (api.openai.com)
Avoid for sensitive data
  • Queries processed on shared infrastructure
  • Data retention policies may log prompts
  • Not suitable for personal data or trade secrets
  • Cannot be used in most government or financial scopes
Private Azure OpenAI tenant (your subscription)
What we deploy
  • Isolated instance in your Azure subscription
  • Microsoft contractually cannot access your data
  • No data used for model training — ever
  • EU data residency guaranteed (Netherlands region)

Your sensitive data never trains a public model

When employees use ChatGPT, Microsoft 365 Copilot, or any public AI tool without a Data Processing Agreement, there is a real risk that internal documents, customer data, or strategy information are used to improve public models. Under GDPR, this can constitute a data breach.

We deploy Azure OpenAI in your own Azure subscription — a private, isolated instance with no connection to Microsoft’s public model endpoints. Your data stays in your infrastructure. Your DPO has contractual evidence of this.

This architecture satisfies GDPR Article 28 (processor obligations), Dutch BIO security requirements, and the data residency provisions required for processing citizen data in the public sector.
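To make the deployment difference concrete, here is a hypothetical sketch of how a request to a private Azure OpenAI deployment is addressed, as opposed to the public endpoint. No request is sent; the resource name, deployment name, and API version below are placeholders, not values from any real environment:

```python
# Sketch: addressing a private Azure OpenAI deployment instead of the
# public OpenAI endpoint. This only constructs the URL and headers to
# show where the traffic goes; resource/deployment names are placeholders.

AZURE_RESOURCE = "contoso-nl"   # your isolated Azure resource (placeholder)
DEPLOYMENT = "gpt-4o-prod"      # your model deployment name (placeholder)
API_VERSION = "2024-06-01"      # assumed API version

def private_chat_url(resource: str, deployment: str, api_version: str) -> str:
    # Requests go to your tenant's *.openai.azure.com endpoint,
    # never to api.openai.com.
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = private_chat_url(AZURE_RESOURCE, DEPLOYMENT, API_VERSION)
headers = {
    "api-key": "<retrieved from Azure Key Vault, never hard-coded>",
    "Content-Type": "application/json",
}
print(url)
```

The point of the sketch is the hostname: every call resolves inside your own Azure subscription, which is what gives your DPO contractual and technical evidence of isolation.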

Fixed-price service

AI Compliance & Security Audit

A structured four-week engagement that produces a complete compliance picture of your current and planned AI systems — mapped against the EU AI Act, GDPR, and Dutch BIO. Delivered as a written report and executive briefing.

€4,950
excl. VAT · fixed price
Book the Audit
Audit deliverables
  • AI Act risk classification for each AI system in scope
  • GDPR Article 22 compliance check for automated decision-making
  • Data flow mapping — where personal data enters, moves, and is stored
  • Dutch BIO alignment assessment for government clients
  • Private deployment architecture review (Azure tenant isolation)
  • Remediation roadmap with prioritised, costed actions
  • Executive briefing document suitable for board or regulator presentation