EU AI Act Risk Classifier
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. It creates a risk-based framework with four tiers: Prohibited AI (Art. 5), High-Risk AI (Art. 6 and Annex III), Limited-Risk AI (Art. 50), and Minimal-Risk AI. The most critical deadline, full obligations for Annex III high-risk systems, falls on 2 August 2026. Prohibited AI has been enforceable since 2 February 2025, and General-Purpose AI (GPAI) model obligations have applied since 2 August 2025. This tool applies the classification rules directly from the Regulation to give you an instant, deterministic risk tier and a full obligations checklist. No registration. No consulting fee.
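The decision order the Regulation implies, and which a deterministic classifier like this one follows, can be sketched as a severity-ordered cascade. This is an illustrative sketch only: the function name, inputs, and labels below are assumptions for the example, not the tool's actual API.

```python
def classify_risk_tier(prohibited: bool,
                       annex_iii_sector: bool,
                       art_6_3_exception_documented: bool,
                       transparency_interaction: bool) -> str:
    """Illustrative decision cascade for the AI Act's four risk tiers.

    Checks run in order of severity: Art. 5 bans first, then Annex III
    high-risk (with the Art. 6(3) carve-out), then Art. 50 transparency
    duties, otherwise minimal risk.
    """
    if prohibited:  # Art. 5 practices are banned outright
        return "Prohibited (Art. 5)"
    if annex_iii_sector and not art_6_3_exception_documented:
        return "High-Risk (Art. 6, Annex III)"
    if transparency_interaction:  # chatbots, deepfakes, emotion recognition
        return "Limited-Risk (Art. 50)"
    return "Minimal-Risk"

# Example: a hiring screener (Annex III employment sector, no documented
# Art. 6(3) exception) lands in the high-risk tier.
print(classify_risk_tier(False, True, False, False))
# → High-Risk (Art. 6, Annex III)
```

Note the ordering matters: a system in an Annex III sector that also chats with users is high-risk first, and the Art. 50 transparency duties apply on top, not instead.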
EU AI Act Classifier
Risk tier for your AI system under Regulation (EU) 2024/1689
Select the description that best matches your AI system. This determines the classification path.
Does your system match a use case prohibited by Art. 5?
Art. 5 prohibits: (1) real-time remote biometric identification in public spaces; (2) social scoring by public authorities; (3) subliminal or manipulative techniques that impair autonomy; (4) exploitation of the vulnerabilities of specific groups (age, disability); (5) untargeted scraping of facial images from the internet or CCTV to build recognition databases; (6) emotion inference in the workplace or in educational institutions (with limited exceptions); (7) predicting future criminal offences based on profiling; (8) biometric categorisation by sensitive characteristics (race, religion, political opinions, sexual orientation).
Annex III defines eight high-risk sectors under Art. 6(2). Systems used in these areas are presumed high-risk unless the Art. 6(3) exception is documented.
Art. 50 requires specific transparency measures for certain system types, regardless of risk tier. Even minimal-risk systems must comply if they interact with people in these ways.
Classification based on Regulation (EU) 2024/1689 (AI Act). The Art. 6(3) exception requires a formal, documented risk assessment. This is not legal advice.
Frequently asked questions
What are the four risk tiers under the EU AI Act?
The AI Act creates four tiers. Prohibited AI (Art. 5): systems that are banned outright, such as real-time biometric identification in public spaces, social scoring by public authorities, and subliminal manipulation. High-Risk AI (Art. 6, Annex III): systems in eight regulated sectors, including biometrics, critical infrastructure, education, employment, credit scoring, law enforcement, migration, and justice. Limited-Risk AI (Art. 50): chatbots, deepfakes, and emotion recognition systems, which carry transparency obligations. Minimal-Risk AI: all other systems, with no mandatory obligations.
What is the 2 August 2026 deadline and who does it apply to?
2 August 2026 is the date by which providers and deployers of high-risk AI systems listed in Annex III must be fully compliant with Articles 9–15 (risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness), Article 43 (conformity assessment), Article 47 (EU declaration of conformity), Article 48 (CE marking), and Article 49 (registration in the EU AI database established under Article 71). If your AI system is used in any of the eight Annex III sectors (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice), this deadline applies to you.
What is a General-Purpose AI (GPAI) model and what obligations apply?
A GPAI model is an AI model trained on large amounts of data using self-supervision and capable of serving multiple purposes. Under Art. 53, all GPAI model providers must maintain technical documentation, provide information to downstream providers, establish a copyright compliance policy, and publish a training data summary. GPAI models with training compute above 10²⁵ FLOPs are classified as systemic-risk models under Art. 51 and face additional obligations under Art. 55: adversarial testing, incident reporting to the AI Office, and active systemic risk mitigation. These obligations have applied since 2 August 2025.
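The 10²⁵ FLOP threshold can be checked against a rough training-compute estimate. The 6·N·D rule of thumb used below (about 6 FLOPs per parameter per training token) is a common estimation heuristic and an assumption of this sketch, not a method the Regulation prescribes:

```python
# Art. 51 presumption threshold for systemic-risk GPAI models.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D heuristic
    (an assumption for illustration, not specified by the AI Act)."""
    return 6 * params * tokens


def is_presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated training compute meets the Art. 51 threshold."""
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_FLOP_THRESHOLD


# A hypothetical 70-billion-parameter model trained on 15 trillion tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs, below the 1e25 threshold.
print(is_presumed_systemic_risk(70e9, 15e12))  # → False
```

Providers near the threshold should note that the AI Office can also designate systemic-risk models on other criteria, so the compute test alone is not decisive.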
What is the Art. 6(3) exception and when does it apply?
Art. 6(3) provides that an AI system listed in Annex III is NOT high-risk if it poses no significant risk of harm to the health, safety, or fundamental rights of natural persons. This exception requires a formal, documented assessment: it cannot be self-declared without analysis. The burden of proof lies with the provider. Regulators will presume a system in an Annex III sector is high-risk unless the documented exception exists. This tool flags the exception if you select it, but the classification should be verified by qualified legal counsel.
What are the fines for non-compliance with the EU AI Act?
Fines are tiered by violation type. Prohibited AI practices (Art. 5 violations): up to €35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk obligations or with Art. 50 transparency obligations: up to €15 million or 3% of global turnover. GPAI model providers: up to €15 million or 3% of global turnover. Supplying incorrect, incomplete, or misleading information to notified bodies or national authorities: up to €7.5 million or 1% of global turnover. National market surveillance authorities enforce these fines, and the European AI Office has direct supervisory authority over GPAI models.
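The "whichever is higher" rule means the effective cap scales with company size. A minimal sketch of the arithmetic, using the top-tier figures from the answer above (the turnover figure is purely illustrative):

```python
def max_fine_eur(fixed_cap_eur: float,
                 pct_of_turnover: float,
                 global_annual_turnover_eur: float) -> float:
    """Administrative fine cap: the higher of the fixed amount and the
    given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, pct_of_turnover * global_annual_turnover_eur)


# Art. 5 tier for a hypothetical company with €2bn turnover:
# 7% of €2bn = €140m, which exceeds the €35m fixed cap.
print(max_fine_eur(35e6, 0.07, 2e9))
```

For a smaller company, say €100m turnover, 7% is only €7m, so the €35m fixed cap governs instead.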
Does the EU AI Act apply to non-EU companies?
Yes. Like the GDPR, the AI Act has extraterritorial reach: it applies if the output of your AI system is used in the EU, regardless of where the provider or deployer is established. A US startup whose AI system affects EU persons (for example, a credit scoring model used by a European bank, or a hiring AI used to screen EU job applicants) falls within scope. Non-EU providers must appoint an EU authorised representative (Art. 22) and may need to register in the EU AI database.