How to Classify AI Tools Under the EU AI Act

A practical guide to classifying popular AI tools (ChatGPT, Copilot, AI CRM) according to EU AI Act risk categories.

3 min read
ComplianceForge AI

Most companies today use at least one AI tool. But do you know which EU AI Act risk category your tools fall into? Here's a practical guide.

ChatGPT, Claude, Gemini — for Content Writing

Classification: Minimal Risk

Using generative AI to write marketing content, emails, or internal documents falls into the minimal-risk category and carries no specific regulatory obligations under the Act.

But beware: if you use the same tools for making decisions that affect people (e.g., evaluating job candidates), the classification changes.

AI in HR — Resume Screening

Classification: HIGH RISK (Annex III, point 4)

AI systems used in recruitment — CV screening, candidate ranking, automated selection — are classified as high-risk under Annex III, point 4 of the EU AI Act.

Obligations include:

  • Risk management system (Article 9)
  • Data governance (Article 10)
  • Technical documentation (Article 11)
  • Human oversight (Article 14)
  • Transparency toward candidates

AI in CRM — Lead Scoring and Profiling

Classification: HIGH RISK

If your CRM uses AI to profile individuals (e.g., lead scoring based on behavior), it falls under the high-risk rules of Article 6(2), and the Article 6(3) exemption is never available for systems that profile natural persons.

Customer Support Chatbot

Classification: Limited Risk (Article 50)

An AI chatbot that directly communicates with users must inform them that they are talking to AI, not a human. This is a transparency obligation under Article 50.

Midjourney, DALL-E — Image Generation

Classification: Limited Risk (Article 50)

AI-generated content must be labeled as artificially generated in a machine-readable format. This applies to images, video, audio, and text.

GitHub Copilot — Code Generation

Classification: Minimal Risk

Code generation tools fall into minimal risk because they don't affect decisions about people and don't directly interact with end users.

Article 6(3) — The "Not Really High-Risk" Exemption

Even if your AI tool falls into an Annex III category, it may be exempt from high-risk classification if it does not pose a significant risk to health, safety, or fundamental rights and meets at least one of these conditions:

  1. It performs a narrow procedural task
  2. It improves the result of a previously completed human activity
  3. It detects decision-making patterns, or deviations from them, without replacing or influencing the prior human assessment
  4. It performs a preparatory task to an assessment

One hard limit: if the system performs profiling of natural persons, there is no exemption, regardless of the other factors.
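The decision logic above can be sketched as a small function. This is an illustrative simplification, not legal advice; the function and parameter names are our own, and real classification requires a full legal assessment.

```python
# Illustrative sketch of the Article 6(2)/6(3) decision flow described above.
# Names are hypothetical; this is not a legal determination tool.

def is_high_risk(in_annex_iii: bool,
                 performs_profiling: bool,
                 meets_exemption: bool) -> bool:
    """Return True if the sketch classifies the system as high-risk.

    in_annex_iii:        the use case appears in an Annex III category
    performs_profiling:  the system profiles natural persons
    meets_exemption:     the system poses no significant risk AND meets
                         at least one Article 6(3) condition (narrow
                         procedural task, improving prior human work, ...)
    """
    if not in_annex_iii:
        # Outside Annex III: may still carry Article 50 transparency
        # duties (limited risk), but not the high-risk obligations.
        return False
    if performs_profiling:
        # Profiling of natural persons removes the exemption entirely.
        return True
    return not meets_exemption

# A CV-screening tool that profiles candidates: high risk even if it
# would otherwise qualify for the exemption.
print(is_high_risk(True, True, True))   # True
# A narrow procedural tool in an Annex III area, no profiling:
print(is_high_risk(True, False, True))  # False
```

The same three questions — Annex III? profiling? exemption condition? — are what a classification walkthrough for each of the tools above boils down to.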


Not sure which category your AI tools fall into? ComplianceForge AI automatically classifies them in 15 minutes.

Want to know your compliance status?

Free questionnaire, AI classification and compliance score in 30 minutes.