EU AI Act enforcement:

Prove your vendor's AI
isn't stealing
your data.

You have a third-party AI on your infrastructure. You approved the contract. But you have no idea what it's actually doing on your network. Sigilla enforces the boundary — and gives your auditor cryptographic proof.

Sigilla Dashboard — Network Activity — NextSense Analytics v2.3.1
// Live feed — all connection attempts — kernel-enforced
 
✅ ALLOWED   sensor-db.internal:5432   1,247 connections today
✅ ALLOWED   ntp.internal:123          12 connections today
❌ BLOCKED   142.250.74.46:443         3 attempts (google.com)
❌ BLOCKED   52.86.111.22:443          1 attempt  (AWS endpoint)
 
DATA LEAVING YOUR NETWORK:  0 bytes ✅
COMPLIANCE REPORT:         Ready in 60 seconds →

Free pilot programme open. Three months free — in exchange for honest feedback and one reference call if Sigilla delivers value.

Apply for pilot →
The Problem

Your auditor needs evidence,
not promises.

Regulated industries are deploying third-party AI — and discovering they have no way to prove those systems are contained. That gap is now a legal problem.

PROBLEM 01

The vendor's AI is a black box

You approved the contract. You read the DPA. But you cannot see what the application is actually doing on your network at runtime. It declared three destinations. Is that all it uses? You don't know.

PROBLEM 02

Compliance evidence takes 40+ hours per audit

ISO 27001 auditors ask for network segmentation evidence. GDPR auditors ask for data flow proof. EU AI Act auditors need human oversight logs. Every audit is 40+ hours of manual collection from systems not designed for it.

PROBLEM 03

August 2026 is closer than you think

EU AI Act enforcement begins in August 2026. High-risk AI systems need audit trails, technical documentation, and human oversight evidence. Fines reach €35M or 7% of global revenue. Most companies have none of it ready.

PROBLEM 04

Trust is not a compliance answer

Your vendor is probably not malicious. But "probably" is not what an auditor accepts. When they ask for proof of data sovereignty, a contract is not proof. A cryptographically signed, independently verifiable log is proof.

How It Works

Three steps to
auditor-ready proof.

STEP 01 — IN PLAIN ENGLISH

Your vendor tells Sigilla exactly which servers their AI is allowed to contact. That list is locked in. They cannot add to it later without your approval.

STEP 02 — IN PLAIN ENGLISH

Your IT team installs the package in 30 minutes. From that moment, any attempt by the AI to contact an unlisted server is automatically blocked and logged. You don't have to do anything.

STEP 03 — IN PLAIN ENGLISH

When your auditor asks for evidence, you open the Sigilla dashboard, click Generate Report, and hand them a signed PDF. They can verify it themselves. The audit takes minutes, not weeks.

Technical detail below — for your IT team

01

Vendor packages their application

The vendor declares every network destination their app needs. That declaration becomes the enforcement policy. Sigilla allows only what was declared — everything else is blocked at the kernel level.

network:
  egress:
    - sensor-db.internal:5432

→ Everything else: DENY
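The decision rule behind that policy can be sketched in a few lines of Python. This is a simplified model for illustration only, not Sigilla's implementation; the function name and policy format are assumptions, and the real enforcement happens in the kernel, not in application code:

```python
# Simplified model of default-deny egress policy:
# allow only what the vendor declared, block everything else.

DECLARED_EGRESS = {
    ("sensor-db.internal", 5432),  # from the vendor's manifest
}

def check_egress(host: str, port: int) -> str:
    """Return the verdict for one connection attempt."""
    if (host, port) in DECLARED_EGRESS:
        return "ALLOW"
    return "DENY"  # not declared -> blocked, no exceptions

print(check_egress("sensor-db.internal", 5432))  # ALLOW
print(check_egress("google.com", 443))           # DENY
```

The key property is the default: anything not on the declared list is denied, so the vendor cannot quietly add a destination without a policy change you approve.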
02

You deploy — fully isolated

Drop the package into your Sigilla dashboard. Network isolation applies instantly. The vendor cannot bypass this — not with a software update, not even deliberately. Every connection attempt is logged in real time.

iptables -A FORWARD \
  -s [app-ip] -j REJECT

iptables -I FORWARD \
  -s [app-ip] -d sensor-db.internal \
  -p tcp --dport 5432 -j ACCEPT
03

Generate your compliance report

Select ISO 27001, GDPR, or EU AI Act. Pick a date range. Click Generate. A cryptographically signed PDF downloads. Your auditor verifies it with openssl — no Sigilla account, no internet, no trust required.

openssl dgst -sha256 \
  -verify sigilla-public.pem \
  -signature report-iso27001-q1-2026.sig \
  report-iso27001-q1-2026.pdf

→ Verified OK ✅
vs. existing tools

Sigilla solves a different problem.

Credo AI, Holistic AI, Vanta — excellent for governing your own AI. None of them answer: what is the vendor's AI doing on your network right now?

Capability                                      Sigilla    Credo AI    Holistic AI   Vanta
Prove vendor AI network behaviour at runtime      ✅          —            —           —
Kernel-level isolation (cannot be bypassed)       ✅          —            —           —
Cryptographically signed compliance reports       ✅       Partial      Partial       —
Works fully offline / air-gap                     ✅          —            —           —
EU AI Act runtime oversight evidence              ✅       Docs only    Docs only     —
Govern your own internal AI systems               —           ✅           ✅          ✅
The Vendor Conversation

What to ask your vendor.
Word for word.

The hardest part is often knowing how to start the conversation. Here's exactly what to say — and what to expect back.

YOU ASK
"We need to demonstrate to our ISO 27001 auditor that your application does not exfiltrate data from our network. Can you provide a Sigilla-compatible deployment package that declares all network destinations your application requires?"
IF THEY SAY YES
They send you a .sigilla package file. Your IT team drops it into the Sigilla dashboard. Network isolation is active within 30 minutes. You have compliance evidence from day one.
IF THEY SAY THEY DON'T KNOW SIGILLA
Forward them sigilla.io/vendor. The packaging process takes them half a day. You can also contact us — we'll reach out to them directly on your behalf.
IF THEY REFUSE OR PUSH BACK
A vendor who refuses to declare their network destinations is a vendor who cannot prove their application is safe. That refusal is itself a significant finding for your risk register — and worth escalating before the EU AI Act deadline.
For Your IT Team

What you need
to run this.

No new hardware. No cloud account. No specialist Linux knowledge beyond what your IT team already has.

REQUIREMENTS
  • Operating system: Ubuntu 22.04 LTS (standard server)
  • CPU / RAM: standard x86 server — no GPU required
  • Network: internal network access only — air-gap capable
  • Internet: not required — works fully offline
  • Installation: ~30 minutes per vendor application
  • Ongoing work: dashboard review + update approvals only
WHAT IT DOES NOT REQUIRE
  • No cloud subscription or SaaS account
  • No GPU or specialised hardware
  • No changes to your existing infrastructure
  • No data sent to Sigilla servers
  • No vendor-specific IT expertise
  • No ongoing maintenance beyond update approvals
If your team can run a Docker container on a Linux server, they can run Sigilla.
Regulatory Deadlines

The regulations driving
urgency right now.

EU AI Act

High-risk AI audit trails

Article 12 requires logs for the full lifetime of high-risk AI. Sigilla generates this automatically at runtime.

NIS2 Directive

Change management evidence

Critical infrastructure operators must demonstrate change management. Sigilla provides a 7-event update audit chain.

Already in force
GDPR Art. 32

Provable data sovereignty

Sovereignty must be demonstrable. Sigilla provides cryptographic proof that data never left your infrastructure.

Already enforced
ISO 27001

Network segmentation evidence

Control A.13 requires demonstrable network segmentation. Sigilla provides kernel-level enforcement plus signed evidence.

Audit standard
Platform

Everything your auditor
needs to say yes.

Kernel-level isolation

iptables default-deny that cannot be bypassed from inside the container. Enforcement at the OS level, not a policy document.

Real-time activity feed

Every connection logged within 10 seconds. Blocked calls to external servers show up immediately. You have never had this visibility.

60-second compliance reports

ISO 27001, GDPR, EU AI Act. Cryptographically signed PDFs. Independently verifiable with openssl — no account needed.

Cryptographic audit chain

Hash-linked event log. Tamper with any entry and the chain breaks — detectable by anyone with a terminal.

Update approval workflow

Every vendor update shows exactly what changed in the network policy. Your approval is logged — that's your EU AI Act human oversight evidence.
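As a sketch of what that change view could contain, comparing the egress lists of two manifest versions is a set difference. The helper below is hypothetical, not Sigilla's API, and the vendor-cloud destination is an invented example:

```python
def policy_diff(old, new):
    """Show what a vendor update adds to or removes from the egress policy."""
    old, new = set(old), set(new)
    return {"added": sorted(new - old), "removed": sorted(old - new)}

v1 = ["sensor-db.internal:5432", "ntp.internal:123"]
v2 = ["sensor-db.internal:5432", "ntp.internal:123",
      "api.vendor-cloud.example:443"]  # new destination in the update

print(policy_diff(v1, v2))
# {'added': ['api.vendor-cloud.example:443'], 'removed': []}
```

Any non-empty `added` list is exactly the question you want to put to the vendor before approving the update.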

Air-gap capable

Zero internet required. Every site runs independently. Required for rail, critical infrastructure, and defence-adjacent environments.

What Sigilla Supports Today

Designed for contained AI.
Not everything. Deliberately.

Sigilla enforces a static network policy declared upfront. That works perfectly for a large class of industrial AI — and we want to be precise about what that class is, so you know exactly what you're buying.

✅ WORKS WELL WITH SIGILLA
  • Predictive maintenance models — reads sensor data, writes predictions to internal DB, no external calls
  • Quality inspection AI — analyses images from production line, outputs to internal dashboard
  • Demand forecasting — reads historical data, produces forecasts, all within your network
  • Document classification — processes internal documents, writes to internal index
  • Anomaly detection — monitors operational data streams, alerts to internal endpoints only
Common pattern: reads internal data → processes locally → writes to internal destination.
Network destinations are known, fixed, and few.
⚠ NOT YET SUPPORTED
  • Agentic AI with external tool calls — systems that call web APIs, search engines, or LLM providers at runtime
  • Large local models (70B+) — hardware declaration and VRAM validation not yet in manifest spec
  • Multi-step autonomous workflows — agents that spawn sub-processes or require runtime human intervention hooks
  • Dynamic network destinations — applications whose egress targets are only known at runtime
These are on the roadmap. Reach out if this is your use case — we want to understand your requirements.
What's Coming

The roadmap follows
where regulation is heading.

The EU AI Act is most concerned about high-risk agentic systems. That's exactly where we're building next — runtime intervention, local model governance, and dynamic policy.

PHASE 2 — 2026

Runtime intervention hooks

Pause, inspect, redirect, or stop an AI agent mid-task. The human oversight evidence that the EU AI Act actually requires for autonomous systems — not just update approvals.

PHASE 2 — 2026

Local model registry

Deploy approved local LLMs (Llama, Mistral, Phi) inside the same isolation boundary. VRAM and hardware requirements declared in the manifest. No external API calls required.

PHASE 3 — 2027

Agentic audit trail

Tool call logging, prompt/response hashing, decision provenance. The compliance evidence that regulators will require when an autonomous agent makes a consequential decision.

If agentic AI is your use case right now, we want to hear from you. Early conversations shape what we build first.
Tell us your use case →
Free Pilot Programme

Three months free.
No commitment.

We're accepting a small number of pilot customers for 2026. Free access in exchange for honest feedback — and one reference call if Sigilla delivers real value. The EU AI Act deadline is August 2026. Starting now gives you time to get it right.

No spam. Personal reply within 48 hours.

Aug '26
EU AI Act deadline
€35M
Max non-compliance fine
60s
Compliance report time
0
Cloud required