"We have AI features. We're nowhere near ready." — that's how one SaaS founder on Reddit described their EU AI Act situation. If this sounds familiar, you're not alone.
Most small and medium businesses (SMBs) use AI tools daily — ChatGPT for emails, Midjourney for visuals, AI chatbots for customer support — but few think about regulatory obligations. And those obligations are already in effect.
Article 4 (AI literacy) and Article 5 (prohibited practices) have applied since 2 February 2025. Penalties: up to €35 million or 7% of global turnover for prohibited practices, up to €15 million or 3% for non-compliance with AI literacy requirements.
The good news: getting compliant isn't as complicated as it sounds. In this guide, we show you how to build the foundations of a compliance program in 7 days — without a legal team, without consultants charging €500/hour, without panic.
Who Needs to Worry About the EU AI Act?
In short: anyone who uses or develops AI systems within the EU or for EU users.
As Martin Warner highlighted in his video watched by 157,000 people: "Thinking your business is too small to be affected? Think again. If you're selling to or processing data from even one EU customer, you're in the game."
This includes:
- A marketing agency using AI for content generation
- A SaaS startup with AI features in its product
- An accounting firm using AI for document analysis
- Any company using ChatGPT, Copilot, or similar tools in business processes
How much does it cost? According to an EU Made Simple analysis, compliance for SMBs costs 1-2.7% of annual revenue. But non-compliance costs more. Significantly more.
Days 1-2: Create Your AI Systems Inventory
Before you can be compliant, you need to know what you're actually using. Most companies don't have the full picture — AI tools creep into daily work without formal approval.
What to do:
1. List all AI tools — not just the ones you purchased. Ask every department: marketing, sales, HR, IT, customer support. Common ones companies forget:
- Grammarly or ChatGPT that employees use "privately" for business tasks
- AI features embedded in existing tools (Notion AI, Canva Magic, Excel Copilot)
- Chatbots on your website
- AI for automated email or ticket processing
2. For each tool, document (see the sketch below for one way to record this):
- Who uses it and in which department
- What it's used for (purpose)
- What data it processes (personal, business, public)
- Who is the provider and do they have compliance documentation
3. Check governance: Is there a process for approving new AI tools? If not — this is the first gap to close.
Days 1-2 output: Complete list of AI systems with basic metadata.
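A spreadsheet works fine, but the same fields can also live in a small machine-readable register. Here is a minimal sketch in Python; the field names and example values are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    """One row of the AI systems inventory (illustrative, not a legal template)."""
    name: str                          # e.g. "Website support chatbot"
    department: str                    # who uses it
    purpose: str                       # what it is used for
    data_categories: list[str] = field(default_factory=list)  # personal / business / public
    provider: str = ""                 # vendor name and contact
    provider_has_docs: bool = False    # does the vendor supply compliance documentation?
    risk_level: str = "unclassified"   # filled in on day 3

inventory = [
    AISystemEntry(
        name="Website support chatbot",
        department="Customer Support",
        purpose="First-line answers to customer questions",
        data_categories=["personal"],
        provider="ExampleVendor",
        provider_has_docs=True,
    ),
]
```

Keeping the register as structured data pays off on day 3, when each entry gets a risk level, and on days 4-5, when the documents are generated from it.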
Tip: Our compliance questionnaire walks you through this process step by step — steps 1-3 cover exactly the inventory and categorization of AI tools.
Day 3: Classify the Risk of Each System
The EU AI Act categorizes AI systems into 4 risk levels:
| Level | Description | Example | Obligations |
|---|---|---|---|
| Prohibited | AI practices that are completely banned | Employee social scoring, manipulative AI | Ban on use, fines up to €35M |
| High risk | AI in critical areas | AI for hiring, credit scoring, biometrics | Full documentation, human oversight, risk management |
| Limited risk | AI requiring transparency | Chatbots, AI-generated content | Obligation to label and inform users |
| Minimal risk | Most common AI tools | Spam filters, content recommendations | No specific obligations (except AI literacy) |
What to do:
1. For each tool in your inventory, determine the risk level (a rough decision sketch follows below). Key questions:
- Does the AI affect rights or access to services? (hiring, credit scoring, education → high risk)
- Does the AI communicate with people who might not know they're talking to AI? (chatbot → limited risk)
- Does it generate content someone could mistake for human-made? (deepfakes, synthetic text → limited risk)
2. Specifically check Article 5 — prohibited practices. Are you certain no tool is doing:
- Emotion recognition in the workplace
- Social scoring of employees or customers
- Manipulation of vulnerable groups
- Non-consensual sexual content generation (nudifier tools — new ban from Omnibus VII)
Day 3 output: Every AI system has an assigned risk level.
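The order of the questions matters: check prohibited practices first, then high-risk areas, then transparency cases. Here is a rough sketch of that decision order in Python, meant as a thinking aid rather than a legal assessment:

```python
def classify_risk(prohibited_practice: bool,
                  affects_rights_or_access: bool,
                  interacts_with_people: bool,
                  generates_synthetic_content: bool) -> str:
    """Rough sketch of the EU AI Act decision order (Art. 5 first, then high risk,
    then transparency). Illustrative only; real cases need careful review."""
    if prohibited_practice:
        return "prohibited"       # stop using the system immediately
    if affects_rights_or_access:
        return "high risk"        # hiring, credit scoring, biometrics, ...
    if interacts_with_people or generates_synthetic_content:
        return "limited risk"     # labeling and transparency duties (Art. 50)
    return "minimal risk"         # no specific obligations beyond AI literacy

# A website chatbot: not prohibited, no effect on rights, talks to people.
print(classify_risk(False, False, True, False))  # -> "limited risk"
```

Running every entry in your inventory through the same checklist keeps the classification consistent across departments.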
Tip: Our free Quick Check gives you an instant classification — enter a description of your AI system and get the risk category in 2 minutes.
Days 4-5: Generate Key Documents
Documentation is the core of EU AI Act compliance. Good news for SMBs: you don't need all documents at once. Start with the three most important:
1. AI Inventory Register (Annex IV)
A formalized list of all AI systems — what you built in days 1-2, but in a structured format. Includes:
- System name and version
- Provider and contact
- Purpose and context of use
- Risk level
- Responsible person
2. AI Acceptable Use Policy (Articles 4, 26)
An internal document defining rules for AI use in your company:
- Which AI tools are approved
- Process for introducing new AI tools
- What employees can and cannot do with AI
- How to handle incorrect AI output
3. AI Risk Assessment (Article 9)
Risk assessment for each high-risk or limited-risk AI system (one example entry is sketched below):
- Identified risks and their likelihood
- Potential impact on users and fundamental rights
- Mitigation measures for each risk
- Who is responsible for monitoring
What to do:
For each document: gather data from your inventory (days 1-2) and classification (day 3), then generate a draft.
Days 4-5 output: Three key compliance documents in draft form.
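If you keep the risk assessment in the same machine-readable register as your inventory, a single entry could look like the sketch below. Every value is invented for illustration; your risks, owners, and dates will differ.

```python
# One entry of the AI risk assessment (illustrative, not a legal template).
risk_entry = {
    "system": "Website support chatbot",
    "risk": "Chatbot gives incorrect answers about contract terms",
    "likelihood": "medium",   # low / medium / high
    "impact": "Customers act on wrong information; complaints, possible claims",
    "mitigation": "Escalate pricing and contract questions to a human agent",
    "owner": "Head of Customer Support",
    "next_review": "2026-09-01",
}
```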
Tip: ComplianceForge AI automatically generates all 10 compliance documents based on your questionnaire answers — including these three, plus Transparency Notice, FRIA, AI Literacy Program, and more.
Day 6: Create Your Action Plan
Documents are the foundation, but compliance is a living process. Day 6 is for turning documents into concrete actions.
What to do:
1. From your risk assessment, extract all identified risks and for each define:
- A concrete mitigation action
- Responsible person
- Implementation deadline
- Verification criteria (how will you know it's done)
2. Prioritize by urgency (a simple sketch of the deadline ordering follows below):
- Immediately: Everything related to Article 5 (prohibited practices) — already active
- This month: AI literacy program (Article 4) — also already active
- By November 2026: Transparency obligations (Article 50) — AI content labeling
- By December 2027: High-risk compliance (for standalone systems)
3. Set up a recurring review — compliance isn't a one-time project. We recommend a monthly status review.
Day 6 output: Structured action plan with deadlines and responsible persons.
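One lightweight way to keep that plan honest is to store the actions as data and sort by deadline, so the Article 5 and Article 4 work always surfaces first. A minimal sketch; every action, owner, and date here is made up for illustration:

```python
from datetime import date

actions = [
    {"action": "Confirm in writing that no vendor tool does workplace emotion recognition",
     "owner": "IT lead", "deadline": date(2026, 3, 15),
     "done_when": "Written confirmation from every vendor on file"},
    {"action": "Run the first AI literacy session for all staff",
     "owner": "HR", "deadline": date(2026, 3, 31),
     "done_when": "Attendance recorded for at least 80% of employees"},
    {"action": "Label AI-generated images in marketing materials",
     "owner": "Marketing", "deadline": date(2026, 10, 1),
     "done_when": "Labeling step added to the publishing checklist"},
]

# Earliest deadline first: prohibited-practice and AI-literacy items come
# before the later transparency and high-risk milestones.
for item in sorted(actions, key=lambda a: a["deadline"]):
    print(f"{item['deadline']}  {item['action']}  ({item['owner']})")
```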
Day 7: Review, Verification, and Delegation
The last day is for quality and accountability.
What to do:
1. Review all documents with at least one other person (four-eyes principle). One person writes, another verifies. Verification questions:
- Have we listed all AI systems? (check with department leads)
- Is the risk classification correct? (compare with regulation examples)
- Do the documents cover all identified risks?
2. Delegate responsibilities:
- Who monitors regulatory changes?
- Who approves new AI tools?
- Who conducts AI literacy training?
- Who is the contact for regulators if they request information?
3. Document decisions — not because the regulator demands it, but because in 6 months you won't remember why you decided something.
Day 7 output: Verified documents, clear responsibilities, next review scheduled.
After 7 Days: What's Next?
Congratulations — you have the foundations. But this isn't a "set and forget" situation.
Ongoing obligations:
- Monthly: Review your inventory — new AI tools? Changes in usage? (see the sketch after this list)
- Quarterly: Update risk assessment — new risks? New measures?
- When regulation changes: Check the impact on your compliance (like the Omnibus VII changes)
- When introducing a new AI tool: Run classification before use
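The cadence above is easy to track next to the inventory itself. A minimal sketch that flags overdue reviews; the intervals mirror the list above and all dates are illustrative:

```python
from datetime import date, timedelta

REVIEW_INTERVALS = {
    "inventory": timedelta(days=30),        # monthly
    "risk_assessment": timedelta(days=90),  # quarterly
}

last_reviewed = {
    "inventory": date(2026, 2, 1),
    "risk_assessment": date(2025, 12, 15),
}

today = date(2026, 3, 10)  # illustrative "today"
for item, interval in REVIEW_INTERVALS.items():
    if today - last_reviewed[item] > interval:
        print(f"{item} review is overdue (last done {last_reviewed[item]})")
```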
Key deadlines for 2026-2028:
| Deadline | Obligation | Status |
|---|---|---|
| February 2025 | Prohibited practices (Art. 5), AI literacy (Art. 4) | ALREADY ACTIVE |
| August 2025 | GPAI model obligations | ALREADY ACTIVE |
| November 2026 | Transparency obligations (Art. 50) | 8 months away |
| December 2027 | High-risk standalone systems | 21 months away |
| August 2028 | High-risk embedded systems | 29 months away |
You Don't Have to Do This Alone
This guide provides the framework, but creating compliance documents from scratch requires knowledge of regulatory requirements, mapping to your specific context, and a lot of copy-pasting from legal sources.
ComplianceForge AI automates that process:
- Compliance questionnaire (15 min) — 9 steps covering all aspects of your AI usage
- Automatic classification — decision tree following EU AI Act logic (Art. 5 → Art. 6 → Annex III → Art. 50)
- Compliance score — see where you stand across 8 dimensions (inventory, policy, oversight, documentation...)
- 10 generated documents — from AI inventory to technical documentation, tailored to your answers
- Action plan — concrete steps with deadlines, delegation, and verification
The questionnaire takes 15 minutes. You get results immediately.
- Start the compliance questionnaire →
- Or try the free Quick Check → find out your risk category in 2 minutes
Sources: EU AI Act — European Parliament, Martin Warner — YouTube, EU Made Simple — YouTube, LegalNodes, Sombra