"We have an AI policy on our intranet."
We hear this constantly. And on paper, it sounds great — the company is aware of the regulation, wrote a document, shared it with employees. Checkbox ticked.
But when regulators come knocking, they won't ask whether you have a policy. They'll ask: can you prove you're following it?
That's where the problem begins. Between "we have rules" and "we have evidence" lies an enormous gap that most companies haven't closed yet.
What Regulators Actually Want
The EU AI Act doesn't just require you to have documents — it requires you to demonstrate a verifiable trail of your compliance activities. Specifically:
- Versioned documents with timestamps — when it was written, when updated, who approved it
- Documented process — not just the outcome, but the steps that led to it
- Proof of human oversight — that someone actually reviews AI system outputs, not just signs forms
- Currency — documentation that reflects the current state, not where things were 6 months ago
Article 11 of the EU AI Act explicitly requires technical documentation that is "updated when necessary" throughout the entire lifecycle of the AI system. Article 9 demands a documented risk management system that is "iterative" — meaning not a one-time document but a living process.
As one user on r/legaltech put it: "This is a huge gap right now. Most companies have 'we use AI responsibly' on their website but zero audit trail."
The GDPR Lesson: Policy Without Evidence = Fine
This isn't a new situation. GDPR was an identical test — and many companies failed.
Look at the pattern that repeated:
- Company writes a privacy policy — publishes it on the website, sends it to employees
- Regulator asks for evidence — when did you last conduct a DPIA? Where are the access logs? Who is the data protection officer and what do they actually do?
- Company can't produce evidence — the policy exists, but there's no trail of implementation
- Fine — not because they didn't have a policy, but because they couldn't prove they were following it
The most notable example: British Airways was fined £20 million in 2020. They had a security policy. What they couldn't produce was evidence of implementing it: logs, test results, documented incident handling.
Marriott International paid £18.4 million in 2020 because they couldn't demonstrate they were conducting regular security reviews — despite having a written policy. The pattern is clear: regulators don't penalize ignorance, they penalize the lack of evidence.
The EU AI Act goes a step further than GDPR. Article 72 requires post-market monitoring with a documented plan. Article 73 requires incident reporting within defined timeframes. These aren't suggestions — they're obligations with concrete evidence requirements.
The same will repeat with the EU AI Act. Companies that have a policy but can't show an audit trail will be the first targets.
5 Self-Assessment Questions
Before reading further, answer these 5 questions:
1. Do you have an inventory of all AI systems you use? Not "we know we use ChatGPT", but a formal register with risk classification, responsible person, and date of last review (a minimal sketch follows these questions).
2. Can you show when your AI documentation was last updated? A versioned document with a timestamp, not a Word file on someone's Desktop.
3. Is there evidence that someone reviews your AI system outputs? A review log, signature, comment — anything showing a human actually oversees the AI.
4. Do you have a documented process for when an AI system makes a mistake? An incident reporting procedure with concrete steps, not a generic statement about "reporting incidents."
5. Can you reconstruct your compliance state from 3 months ago? If a regulator asks what your compliance looked like in January, can you prove it?
If you answered "no" to 3 or more questions, you have an evidence problem, not a policy problem. Or more precisely: you have traces, but not evidence.
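For question 1, the register doesn't need to be elaborate to count. A minimal sketch in Python; the field names and the example entry are illustrative, not terms mandated by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One row of an AI inventory register (fields are illustrative)."""
    name: str          # e.g. "Support chatbot"
    vendor: str        # supplier of the underlying model or service
    risk_class: str    # your classification, e.g. "minimal" / "limited" / "high"
    owner: str         # accountable person, by name or role
    last_review: date  # when this entry was last checked

register = [
    AISystemRecord(
        name="Support chatbot",
        vendor="ExampleVendor",
        risk_class="limited",
        owner="Head of Customer Support",
        last_review=date(2026, 2, 1),
    ),
]
```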
The Difference Between Traces and Evidence
This is the critical distinction that many miss.
Traces are any records that something happened:
- Git commits
- Email threads discussing AI policy
- Slack messages like "I reviewed the output"
- A Word document with a creation date
Evidence consists of structured, verifiable artifacts that prove compliance:
- Versioned documents with hash verification — cannot be retroactively modified
- Formal audit logs with timestamps and signatures
- Checklists with documented review of each item
- Evidence packages with integrity — everything in one place, verifiable
As one Hacker News user observed: "Anyone can generate an alternative chain of sha256 hashes" — even logging itself isn't sufficient without a mechanism guaranteeing integrity.
Traces are useful. But traces aren't evidence. Regulators don't ask "show me that you did something" — they ask "show me that you can prove what you did, when, and who was responsible."
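To see what a "mechanism guaranteeing integrity" can look like, here is a minimal sketch of a hash-chained audit log in Python: each entry commits to the previous entry's hash, so editing any record breaks every hash after it. As the quote above warns, the chain alone is not enough; the latest hash still needs to be anchored somewhere outside your control, such as a trusted timestamping service or a third party, so the whole chain can't be silently regenerated.

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64

def append_entry(log: list, event: str, actor: str) -> dict:
    """Append an audit entry that commits to the previous entry's hash."""
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "actor": actor,
        "prev_hash": log[-1]["hash"] if log else GENESIS,
    }
    # Hash a canonical serialization so verification is reproducible.
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify(log: list) -> bool:
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev_hash"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```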
How to Build Evidence Instead of Traces
The good news: you don't need to build infrastructure from scratch. You need a system that turns your compliance activities into verifiable evidence.
1. Hash-Verified Documents
Every generated document should have a cryptographic hash — a digital "fingerprint" proving the document hasn't been altered after creation. If a regulator asks "was this risk assessment written before the incident," a hash with a timestamp proves it.
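A minimal sketch of such a fingerprint, using only Python's standard library (the file name is illustrative):

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def fingerprint(path: str) -> dict:
    """Record a verifiable fingerprint of a compliance document."""
    return {
        "document": path,
        "sha256": hashlib.sha256(Path(path).read_bytes()).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Anyone holding the file can recompute the hash and confirm the document
# is byte-for-byte identical to what was recorded.
record = fingerprint("risk_assessment_v2.pdf")
```

One caveat: the timestamp in this sketch is only as trustworthy as the clock that produced it. For it to count as evidence, pair the hash with an external timestamp, for example from an RFC 3161 timestamping authority, so the "written before the incident" claim doesn't rest on your own server.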
2. Versioning with History
Not just "version 2.1" — full change history with dates, authors, and reasons for changes. When you update your AI Inventory Register because you added a new tool, that change must be documented.
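In practice that means an append-only changelog rather than an overwritten field. A sketch, with hypothetical names and dates:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: existing entries can't be modified in place
class ChangeRecord:
    version: str     # e.g. "2.1"
    changed_at: str  # ISO 8601 timestamp
    author: str      # who made the change
    reason: str      # why it changed, not just what changed

history = [
    ChangeRecord("2.0", "2026-01-10T09:00:00+00:00", "J. Novak", "Annual review"),
    ChangeRecord("2.1", "2026-03-02T14:30:00+00:00", "J. Novak",
                 "Added new summarization tool to AI Inventory Register"),
]
```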
3. Evidence Package
Instead of 10 scattered documents, a single package containing (a manifest sketch follows this list):
- All compliance documents for a specific AI system
- Compliance score with dimensions
- Audit trail of all changes
- Verification hash for each document
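A rough sketch of how such a package might be assembled: the manifest hashes each document, then the whole manifest is hashed, so one top-level digest vouches for the entire bundle. File names and score dimensions are illustrative:

```python
import hashlib
import json
from pathlib import Path

def build_evidence_package(doc_paths: list, compliance_score: dict) -> dict:
    """Bundle per-document hashes and a score into one verifiable manifest."""
    manifest = {
        "documents": {p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
                      for p in doc_paths},
        "compliance_score": compliance_score,
    }
    # The package hash covers everything above; change any document
    # (or the score) and it no longer matches.
    manifest["package_sha256"] = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()
    return manifest

package = build_evidence_package(
    ["risk_assessment_v2.pdf", "ai_inventory.json", "review_log.json"],
    {"overall": 0.82, "human_oversight": 0.90},
)
```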
4. Four-Eyes Principle
Every critical document needs review by a second person — and that review must be documented. Not "a colleague looked at it," but formal verification with a checklist.
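As data, a four-eyes review can be a record that refuses to exist unless a different person signed off on every checklist item. A sketch with hypothetical names and checklist items:

```python
from datetime import datetime, timezone

def record_review(document: str, author: str, reviewer: str, checklist: dict) -> dict:
    """Create a formal review record; the author may not review their own work."""
    if reviewer == author:
        raise ValueError("Four-eyes principle: reviewer must differ from author")
    unresolved = [item for item, done in checklist.items() if not done]
    if unresolved:
        raise ValueError(f"Unresolved checklist items: {unresolved}")
    return {
        "document": document,
        "author": author,
        "reviewer": reviewer,
        "checklist": checklist,
        "signed_off_at": datetime.now(timezone.utc).isoformat(),
    }

review = record_review(
    "risk_assessment_v2.pdf", "J. Novak", "M. Svoboda",
    {"scope_complete": True, "risks_current": True, "owner_named": True},
)
```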
Deadlines Are Approaching — But Not Equally for Everyone
While Omnibus VII delayed deadlines for high-risk AI systems (December 2027 for standalone, August 2028 for embedded), three categories of obligations are already active:
- Prohibited practices (Art. 5) — since February 2025
- AI literacy (Art. 4) — since February 2025
- Transparency (Art. 50) — November 2026 (just 8 months away)
For these categories, "we'll start preparing when the time comes" is already too late. You need evidence today.
Next Step
Stop thinking about compliance as a document you need to write. Start thinking about it as a process you need to prove.
- Check your AI compliance readiness → — 9-step questionnaire with compliance score
- Quick risk check → — find out your risk category in 2 minutes
- Check your AI literacy readiness → — free assessment
Sources: r/legaltech — Traces vs Evidence thread, Hacker News — Article 12 Logging, Martin Warner — EU AI Act Enforcement, HSF Kramer — Transparency Obligations