EU AI Act Risk Classifier

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. It creates a risk-based framework with four tiers: Prohibited AI (Art. 5), High-Risk AI (Art. 6 and Annex III), Limited-Risk AI (Art. 50), and Minimal-Risk AI. The most critical deadline, full obligations for Annex III high-risk systems, falls on 2 August 2026, less than four months from today. Prohibited AI has been enforceable since 2 February 2025, and General-Purpose AI (GPAI) model obligations have applied since 2 August 2025. This tool applies the classification rules directly from the Regulation to give you an instant, deterministic risk tier and a full obligations checklist. No registration. No consulting fee.

EU AI Act Classifier

Risk tier for your AI system under Regulation (EU) 2024/1689

Select the description that best matches your AI system. This determines the classification path.

Does your system match any prohibited use case under Art. 5?

Art. 5 prohibits: (1) real-time remote biometric identification in publicly accessible spaces; (2) social scoring by public authorities; (3) subliminal or manipulative techniques that impair autonomy; (4) exploitation of vulnerabilities of specific groups (age, disability); (5) untargeted scraping of facial images from the internet or CCTV to build facial recognition databases; (6) emotion inference in the workplace or in educational institutions (with limited exceptions); (7) prediction of future offences based on profiling; (8) biometric categorisation by sensitive characteristics (race, religion, political opinions, sexual orientation).

Annex III defines eight high-risk sectors under Art. 6(2). Systems used in these areas are presumed high-risk unless the Art. 6(3) exception is documented.

Art. 50 requires specific transparency measures for certain types of systems, regardless of risk tier. Even minimal-risk systems must comply if they interact with people in these ways.

The classification is based on Regulation (EU) 2024/1689 (the AI Act). The Art. 6(3) exception requires a formal, documented risk assessment. This does not constitute legal advice.

Frequently asked questions

What are the four risk tiers under the EU AI Act?

The AI Act creates four tiers. Prohibited AI (Art. 5): systems banned outright, including real-time biometric identification in public spaces, social scoring by public authorities, and subliminal manipulation. High-Risk AI (Art. 6, Annex III): systems in eight regulated sectors including biometrics, critical infrastructure, education, employment, credit scoring, law enforcement, migration, and justice. Limited-Risk AI (Art. 50): chatbots, deepfakes, and emotion recognition systems subject to transparency obligations. Minimal-Risk AI: all other systems, with no mandatory obligations.
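The tier hierarchy described above is an ordered check: prohibition trumps everything, the Annex III presumption applies unless the Art. 6(3) exception is documented, and transparency duties come last. A minimal sketch of that ordering follows; the predicate names are hypothetical, not taken from the tool itself:

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "Prohibited (Art. 5)"
    HIGH = "High-Risk (Art. 6, Annex III)"
    LIMITED = "Limited-Risk (Art. 50)"
    MINIMAL = "Minimal-Risk"


def classify(uses_prohibited_practice: bool,
             in_annex_iii_sector: bool,
             art_6_3_exception_documented: bool,
             triggers_art_50_transparency: bool) -> RiskTier:
    """Ordered check mirroring the Act's tier hierarchy."""
    if uses_prohibited_practice:
        return RiskTier.PROHIBITED
    # Annex III systems are presumed high-risk unless the
    # Art. 6(3) exception has been formally documented.
    if in_annex_iii_sector and not art_6_3_exception_documented:
        return RiskTier.HIGH
    # Art. 50 transparency duties apply regardless of sector.
    if triggers_art_50_transparency:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note that a system escaping the high-risk tier via Art. 6(3) can still land in the limited-risk tier if it triggers Art. 50 transparency duties.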

What is the 2 August 2026 deadline and who does it apply to?

2 August 2026 is the date by which providers and deployers of High-Risk AI systems listed in Annex III must be fully compliant with Articles 9–15 (risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness), Article 43 (conformity assessment), Article 47 (EU declaration of conformity), Article 48 (CE marking), and Article 49 (registration in the EU AI database established under Article 71). If your AI system is used in any of the eight Annex III sectors (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice), this deadline applies to you.

What is a General-Purpose AI (GPAI) model and what obligations apply?

A GPAI model is an AI model trained on large amounts of data using self-supervision and capable of serving multiple purposes. Under Art. 53, all GPAI model providers must maintain technical documentation, provide information to downstream providers, establish a copyright compliance policy, and publish a training data summary. GPAI models with training compute above 10²⁵ FLOPs are classified as systemic-risk models under Art. 51 and face additional obligations under Art. 55: adversarial testing, incident reporting to the AI Office, and active systemic risk mitigation. These obligations have applied since 2 August 2025.
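The Art. 51 systemic-risk presumption is a straightforward compute threshold that stacks Art. 55 duties on top of the Art. 53 baseline. A sketch, assuming a hypothetical function name:

```python
# Art. 51 presumes systemic risk when cumulative training compute
# exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_FLOPS = 1e25


def gpai_obligations(training_flops: float) -> list[str]:
    """Baseline Art. 53 duties for every GPAI provider, plus
    Art. 55 duties when the compute presumption is met."""
    duties = [
        "technical documentation (Art. 53)",
        "information for downstream providers (Art. 53)",
        "copyright compliance policy (Art. 53)",
        "training data summary (Art. 53)",
    ]
    if training_flops > SYSTEMIC_RISK_FLOPS:
        duties += [
            "adversarial testing (Art. 55)",
            "serious incident reporting to the AI Office (Art. 55)",
            "systemic risk mitigation (Art. 55)",
        ]
    return duties
```

The threshold triggers a presumption, not a final classification: under Art. 51 the Commission can also designate a model as systemic-risk on other grounds, which a real implementation would need to accommodate.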

What is the Art. 6(3) exception and when does it apply?

Art. 6(3) provides that an AI system listed in Annex III is NOT high-risk if it poses no significant risk of harm to the health, safety, or fundamental rights of natural persons. This exception requires a formal, documented assessment: it cannot be self-declared without analysis. The burden of proof lies with the provider. Regulators will presume a system in an Annex III sector is high-risk unless the documented exception exists. This tool flags the exception if you select it, but the classification should be verified by qualified legal counsel.

What are the fines for non-compliance with the EU AI Act?

Fines are tiered by violation type. For Prohibited AI (Art. 5 violations): up to €35 million or 7% of global annual turnover, whichever is higher. For non-compliance with other obligations, including High-Risk requirements and Art. 50 transparency duties: up to €15 million or 3% of global turnover. For GPAI model providers: up to €15 million or 3% of global turnover. For supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities: up to €7.5 million or 1% of global turnover. National market surveillance authorities enforce these fines, and the European AI Office has direct supervisory authority over GPAI models.
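Because each tier is "whichever is higher", the effective cap scales with turnover once a company is large enough. A minimal sketch of that rule (the function name is illustrative):

```python
def fine_cap_eur(fixed_cap: float, turnover_share: float,
                 global_turnover: float) -> float:
    """Maximum fine under the Act's tiered structure: the higher
    of a fixed amount and a share of global annual turnover."""
    return max(fixed_cap, turnover_share * global_turnover)


# Art. 5 violation by a firm with EUR 1 billion global turnover:
# the 7% branch (about EUR 70m) exceeds the EUR 35m fixed cap.
art5_cap = fine_cap_eur(35e6, 0.07, 1e9)
```

For smaller firms the fixed cap dominates; for example, at €100 million turnover the 7% branch is only €7 million, so the €35 million cap applies.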

Does the EU AI Act apply to non-EU companies?

Yes. Like the GDPR, the AI Act has extraterritorial reach. It applies if the output of your AI system is used in the EU, regardless of where the provider or deployer is established. A US startup whose AI system affects EU persons (for example, a credit scoring model used by a European bank, or a hiring AI used to screen EU job applicants) falls within scope. Non-EU providers must appoint an EU representative (Art. 22) and may need to register in the EU AI database.