Understanding EU AI Act Risk Categories: A Complete Guide
Learn about the four risk categories defined by the EU AI Act and how to classify your AI system correctly.
The EU AI Act introduces a risk-based approach to AI regulation, categorizing AI systems into four distinct risk levels. Understanding these categories is crucial for compliance.
The Four Risk Categories
1. Unacceptable Risk
AI systems that pose a clear threat to people's safety, livelihoods, or fundamental rights. These systems are prohibited outright.
Examples:
- Social scoring by governments
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- Exploitation of vulnerabilities of specific groups
- Subliminal techniques that distort behavior
2. High Risk
AI systems that could significantly impact health, safety, or fundamental rights.
Key Areas:
- Critical infrastructure
- Educational or vocational training
- Employment and worker management
- Essential services (credit, insurance)
- Law enforcement
- Migration and border control
- Administration of justice
Requirements:
- Risk management system
- Data governance
- Technical documentation
- Record-keeping
- Transparency and information provision
- Human oversight
- Accuracy, robustness, and cybersecurity
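For teams tracking these obligations internally, the list above can be modeled as a simple compliance checklist. The sketch below is purely illustrative: the class and item names are hypothetical, and the items merely mirror the requirements listed here, not the full legal text.

```python
from dataclasses import dataclass, field

# Illustrative checklist of the high-risk requirements listed above.
# Item names paraphrase the obligations; they are not legal definitions.
HIGH_RISK_REQUIREMENTS = [
    "risk management system",
    "data governance",
    "technical documentation",
    "record-keeping",
    "transparency and information provision",
    "human oversight",
    "accuracy, robustness, and cybersecurity",
]

@dataclass
class ComplianceChecklist:
    """Hypothetical tracker for high-risk compliance work items."""
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, item: str) -> None:
        # Only accept items from the known requirement list.
        if item not in HIGH_RISK_REQUIREMENTS:
            raise ValueError(f"unknown requirement: {item}")
        self.completed.add(item)

    def outstanding(self) -> list:
        # Preserve the original ordering of requirements.
        return [r for r in HIGH_RISK_REQUIREMENTS if r not in self.completed]

checklist = ComplianceChecklist("resume-screening model")
checklist.mark_done("data governance")
print(len(checklist.outstanding()))  # 6
```

A structure like this makes it easy to report which obligations remain open for each high-risk system in an inventory.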
3. Limited Risk
AI systems that pose limited risk but are subject to specific transparency obligations, so that people know when they are interacting with AI or viewing AI-generated content.
Examples:
- Chatbots (must inform users they're interacting with AI)
- Emotion recognition systems
- Biometric categorization systems
- Deep fakes (must be labeled)
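As a concrete illustration of the chatbot transparency obligation, a minimal disclosure could be prepended to the bot's first reply. The wording and function below are hypothetical, not prescribed by the Act:

```python
# Hypothetical sketch: prepend an AI-interaction disclosure to a
# chatbot's opening reply, reflecting the transparency obligation
# that users must be informed they are interacting with AI.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def first_response(reply_text: str) -> str:
    """Return the opening reply with the disclosure prepended."""
    return f"{AI_DISCLOSURE}\n\n{reply_text}"

print(first_response("Hi! How can I help you today?"))
```

The exact wording and placement are up to the provider; what matters is that the disclosure is clear before or at the start of the interaction.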
4. Minimal Risk
The vast majority of AI systems fall into this category. The Act imposes no mandatory obligations on them, although voluntary codes of conduct are encouraged.
Examples:
- AI-enabled video games
- Spam filters
- Inventory management systems
How to Classify Your System
- Identify the primary purpose of your AI system
- Assess the context in which it will be used
- Evaluate potential impacts on individuals
- Consider the sector of deployment
- Review specific use cases mentioned in the Act
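The classification steps above can be sketched as a simple decision function. The keyword sets and category triggers below are heavily simplified illustrations, not an exhaustive or authoritative mapping of the Act's provisions, and this is not legal advice:

```python
# Hypothetical sketch: coarse mapping from a system's purpose and
# deployment area to an EU AI Act risk category. The keyword sets are
# illustrative only; real classification requires legal analysis.

PROHIBITED_PRACTICES = {"social scoring", "subliminal manipulation",
                        "exploitation of vulnerable groups"}
HIGH_RISK_AREAS = {"critical infrastructure", "education", "employment",
                   "essential services", "law enforcement",
                   "migration", "justice"}
TRANSPARENCY_USES = {"chatbot", "emotion recognition",
                     "biometric categorization", "deep fake"}

def classify(purpose: str, deployment_area: str) -> str:
    """Return a coarse risk category for an AI system."""
    if purpose in PROHIBITED_PRACTICES:
        return "unacceptable"  # prohibited outright
    if deployment_area in HIGH_RISK_AREAS:
        return "high"          # full compliance requirements apply
    if purpose in TRANSPARENCY_USES:
        return "limited"       # transparency obligations only
    return "minimal"           # no mandatory obligations

print(classify("chatbot", "customer support"))  # limited
print(classify("spam filter", "email"))         # minimal
```

Note the ordering: prohibited practices are checked first, then high-risk deployment areas, then transparency-triggering uses, mirroring the step-by-step assessment above.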
Timeline for Compliance
- August 1, 2024: AI Act enters into force
- February 2, 2025: Prohibitions on unacceptable-risk AI apply
- August 2, 2025: Obligations for general-purpose AI models apply
- August 2, 2026: Most remaining provisions apply, including transparency obligations for limited-risk systems and requirements for most high-risk systems
- August 2, 2027: Requirements apply to high-risk AI embedded in regulated products
Next Steps
Use our risk classifier tool to determine your AI system's category and understand your compliance obligations.