
Understanding EU AI Act Risk Categories: A Complete Guide

Learn about the four risk categories defined by the EU AI Act and how to classify your AI system correctly.

By EU AI Compliance Team
#risk-assessment #compliance #regulation

The EU AI Act introduces a risk-based approach to AI regulation, categorizing AI systems into four distinct risk levels. Understanding these categories is crucial for compliance.

The Four Risk Categories

1. Unacceptable Risk

AI systems that pose a clear threat to safety, livelihoods, and rights of people. These are prohibited outright.

Examples:

  • Social scoring by governments
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
  • Exploitation of vulnerabilities of specific groups
  • Subliminal techniques that distort behavior

2. High Risk

AI systems that could significantly impact health, safety, or fundamental rights.

Key Areas:

  • Critical infrastructure
  • Educational or vocational training
  • Employment and worker management
  • Essential services (credit, insurance)
  • Law enforcement
  • Migration and border control
  • Administration of justice

Requirements:

  • Risk management system
  • Data governance
  • Technical documentation
  • Record-keeping
  • Transparency and information provision
  • Human oversight
  • Accuracy, robustness, and cybersecurity

3. Limited Risk

AI systems that are permitted but subject to specific transparency obligations, so that people know when they are interacting with AI or viewing AI-generated content.

Examples:

  • Chatbots (must inform users they're interacting with AI)
  • Emotion recognition systems
  • Biometric categorization systems
  • Deep fakes (must be labeled)

4. Minimal Risk

The vast majority of AI systems fall into this category. The Act imposes no mandatory obligations on them, though providers are encouraged to adopt voluntary codes of conduct.

Examples:

  • AI-enabled video games
  • Spam filters
  • Inventory management systems

How to Classify Your System

  1. Identify the primary purpose of your AI system
  2. Assess the context in which it will be used
  3. Evaluate potential impacts on individuals
  4. Consider the sector of deployment
  5. Review specific use cases mentioned in the Act
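The decision flow above can be sketched in code. This is a minimal, hypothetical illustration only: the keyword lists and the `classify` helper are assumptions invented for this sketch, and real classification under the Act requires legal analysis of the specific use case, not keyword matching.

```python
# Hypothetical sketch of the classification flow described above.
# The keyword sets below are illustrative assumptions, not the Act's
# legal definitions.

PROHIBITED_PRACTICES = {
    "social scoring",
    "subliminal manipulation",
    "exploitation of vulnerable groups",
}

HIGH_RISK_AREAS = {
    "critical infrastructure",
    "education",
    "employment",
    "essential services",
    "law enforcement",
    "migration",
    "justice",
}

TRANSPARENCY_SYSTEMS = {
    "chatbot",
    "emotion recognition",
    "biometric categorization",
    "deep fake",
}


def classify(purpose: str, sector: str) -> str:
    """Return an indicative risk category for an AI system.

    `purpose` and `sector` are free-text descriptions; matching is a
    naive substring check, purely for illustration.
    """
    text = f"{purpose} {sector}".lower()
    if any(p in text for p in PROHIBITED_PRACTICES):
        return "unacceptable"
    if any(a in text for a in HIGH_RISK_AREAS):
        return "high"
    if any(s in text for s in TRANSPARENCY_SYSTEMS):
        return "limited"
    return "minimal"
```

For example, `classify("candidate screening", "employment")` would flag the high-risk employment area, while `classify("spam filtering", "email")` falls through to minimal risk. Note the ordering: prohibitions are checked first, because a prohibited practice stays prohibited regardless of sector.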

Timeline for Compliance

  • August 2024: AI Act enters into force
  • February 2025: Prohibitions on unacceptable-risk AI apply
  • August 2025: Obligations for general-purpose AI models apply
  • August 2026: Most remaining provisions apply, including high-risk requirements and limited-risk transparency obligations
  • August 2027: Extended transition ends for high-risk systems embedded in regulated products
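To track these deadlines programmatically, the milestones can be kept in a small lookup table. The dates below are taken from the Act's staged application schedule (milestones fall on the 1st or 2nd of the listed months); the `MILESTONES` table and `upcoming` helper are illustrative names invented for this sketch.

```python
from datetime import date

# Key application dates, assumed from the Act's staged timeline.
MILESTONES = {
    date(2024, 8, 1): "AI Act enters into force",
    date(2025, 2, 2): "Prohibitions on unacceptable-risk AI apply",
    date(2025, 8, 2): "Obligations for general-purpose AI models apply",
    date(2026, 8, 2): "Most high-risk and transparency obligations apply",
}


def upcoming(today: date) -> list[tuple[date, str]]:
    """Return milestones on or after `today`, soonest first."""
    return sorted(
        (d, label) for d, label in MILESTONES.items() if d >= today
    )
```

Calling `upcoming(date.today())` gives the deadlines still ahead of you, which is a convenient input for compliance planning.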

Next Steps

Use our risk classifier tool to determine your AI system's category and understand your compliance obligations.

