EU AI Act Risk Classifier
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. It creates a risk-based framework with four tiers: Prohibited AI (Art. 5), High-Risk AI (Art. 6 and Annex III), Limited-Risk AI (Art. 50), and Minimal-Risk AI. The most critical deadline, full obligations for Annex III high-risk systems, falls on 2 August 2026, less than four months from today. Prohibited AI has been enforceable since 2 February 2025. General-Purpose AI (GPAI) model obligations have applied since 2 August 2025. This tool uses the classification rules directly from the Regulation to give you an instant, deterministic risk tier and a full obligations checklist. No registration. No consulting fee.
EU AI Act Classifier
Risk tier for your AI system under Regulation (EU) 2024/1689
Select the best description of your AI system. This determines the classification track.
Does your system match any Art. 5 prohibited use case?
Art. 5 prohibits: (1) real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions); (2) social scoring systems, whether operated by public or private actors; (3) subliminal or manipulative techniques that impair autonomy; (4) exploiting vulnerabilities of specific groups (age, disability, social or economic situation); (5) untargeted facial scraping from the internet or CCTV to build recognition databases; (6) inferring emotions in workplaces or educational institutions (with narrow exceptions); (7) predicting the risk of future criminal offences based solely on profiling or personality traits; (8) biometric categorization to infer sensitive characteristics (race, religion, political opinions, sexual orientation).
Annex III defines 8 high-risk sectors under Art. 6(2). Systems used in these areas are presumed high-risk unless the Art. 6(3) exception is documented.
Art. 50 requires specific transparency measures for certain system types, regardless of risk tier. Even minimal-risk systems must comply if they interact with persons in these ways.
Classification is based on Regulation (EU) 2024/1689 (AI Act). The Art. 6(3) exception requires formal documented risk assessment. Not legal advice.
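The decision logic the tool applies is simple enough to sketch as a short cascade. Here is a minimal TypeScript illustration with invented names (this is not the tool's actual code); note that in the Regulation the Art. 50 transparency duties stack on top of whatever tier applies, while this sketch reports only the dominant tier:

```ts
// Hypothetical sketch of the classifier's decision cascade.
// Field and function names are illustrative, not the tool's real API.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface SystemProfile {
  matchesArt5Practice: boolean;        // any Art. 5 prohibited use case
  inAnnexIIISector: boolean;           // one of the eight Annex III areas
  art63ExceptionDocumented: boolean;   // formal Art. 6(3) assessment on file
  triggersArt50Transparency: boolean;  // chatbot, deepfake, emotion recognition
}

function classify(p: SystemProfile): RiskTier {
  if (p.matchesArt5Practice) return "prohibited";  // Art. 5: banned outright
  if (p.inAnnexIIISector && !p.art63ExceptionDocumented) {
    return "high";                                 // Annex III presumption holds
  }
  if (p.triggersArt50Transparency) return "limited"; // Art. 50 transparency duties
  return "minimal";                                  // everything else
}
```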
Frequently asked questions
What are the four risk tiers under the EU AI Act?
The AI Act creates four tiers. Prohibited AI (Art. 5) covers systems that are banned outright, such as real-time biometric identification in public spaces, social scoring, and subliminal manipulation. High-Risk AI (Art. 6, Annex III) covers systems in eight regulated sectors, including biometrics, critical infrastructure, education, employment, credit scoring, law enforcement, migration, and justice. Limited-Risk AI (Art. 50) covers chatbots, deepfakes, and emotion recognition systems, which carry transparency obligations. Minimal-Risk AI covers all other systems, with no mandatory obligations.
What is the 2 August 2026 deadline and who does it apply to?
2 August 2026 is the date by which providers and deployers of High-Risk AI systems listed in Annex III must be fully compliant with Articles 9–15 (risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness), Article 43 (conformity assessment), Article 47 (EU declaration of conformity), Article 48 (CE marking), and Article 49 (registration in the EU database established under Article 71). If your AI system is used in any of the eight Annex III sectors (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice), this deadline applies to you.
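As a rough data sketch, the checklist that this deadline triggers can be tabulated per article (an illustrative structure, assuming the tool generates its obligations list from something like this, not its actual internal format):

```ts
// Illustrative mapping of the high-risk compliance checklist to articles.
const HIGH_RISK_OBLIGATIONS: ReadonlyArray<{ article: string; obligation: string }> = [
  { article: "Art. 9",  obligation: "Risk management system" },
  { article: "Art. 10", obligation: "Data and data governance" },
  { article: "Art. 11", obligation: "Technical documentation" },
  { article: "Art. 12", obligation: "Record-keeping (logging)" },
  { article: "Art. 13", obligation: "Transparency and instructions for use" },
  { article: "Art. 14", obligation: "Human oversight" },
  { article: "Art. 15", obligation: "Accuracy, robustness and cybersecurity" },
  { article: "Art. 43", obligation: "Conformity assessment" },
  { article: "Art. 47", obligation: "EU declaration of conformity" },
  { article: "Art. 48", obligation: "CE marking" },
  { article: "Art. 49", obligation: "Registration in the EU database (Art. 71)" },
];
```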
What is a General-Purpose AI (GPAI) model and what obligations apply?
A GPAI model is an AI model trained on large amounts of data using self-supervision and capable of serving a wide range of purposes. Under Art. 53, all GPAI model providers must maintain technical documentation, provide information to downstream providers, establish a copyright compliance policy, and publish a training data summary. GPAI models trained with more than 10²⁵ FLOPs of cumulative compute are presumed to pose systemic risk under Art. 51 and face additional obligations under Art. 55: adversarial testing, incident reporting to the AI Office, and active systemic risk mitigation. These obligations have applied since 2 August 2025.
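The compute threshold itself is a single numeric comparison. A minimal sketch (the function name and input are illustrative; providers are the ones who must estimate and report training compute):

```ts
// Art. 51(2) presumption: cumulative training compute above 10^25 FLOPs.
const SYSTEMIC_RISK_FLOPS = 1e25;

// True when a GPAI model is presumed to pose systemic risk.
function presumedSystemicRisk(trainingFlops: number): boolean {
  return trainingFlops > SYSTEMIC_RISK_FLOPS;
}

presumedSystemicRisk(5e25); // true: Art. 55 duties apply on top of Art. 53
```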
What is the Art. 6(3) exception and when does it apply?
Art. 6(3) provides that an AI system listed in Annex III is NOT high-risk if it poses no significant risk of harm to the health, safety, or fundamental rights of natural persons. This exception requires a formal, documented assessment; it cannot be self-declared without analysis, and it never applies to systems that perform profiling of natural persons, which remain high-risk in all cases. The burden of proof lies with the provider. Regulators will presume a system in an Annex III sector is high-risk unless the documented exception exists. This tool flags the exception if you select it, but the classification should be verified by qualified legal counsel.
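The exception logic reduces to two checks. A sketch with illustrative flags (the real assessment is a documented legal analysis, not a pair of booleans):

```ts
// Sketch of the Annex III presumption with the Art. 6(3) exception.
function annexIIITier(opts: {
  performsProfiling: boolean;             // profiling of natural persons
  exceptionAssessmentDocumented: boolean; // formal Art. 6(3) assessment on file
}): "high" | "not-high-risk" {
  if (opts.performsProfiling) return "high";  // always high-risk under Art. 6(3)
  if (opts.exceptionAssessmentDocumented) return "not-high-risk";
  return "high";                              // presumption stands by default
}
```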
What are the fines for non-compliance with the EU AI Act?
Fines are tiered by violation type. For Prohibited AI (Art. 5 violations): up to €35 million or 7% of global annual turnover, whichever is higher. For non-compliance with high-risk obligations or with the Art. 50 transparency obligations: up to €15 million or 3% of global turnover. For supplying incorrect, incomplete, or misleading information to notified bodies or national authorities: up to €7.5 million or 1% of global turnover. For GPAI model providers, the Commission can impose fines of up to €15 million or 3% of global turnover (Art. 101). National market surveillance authorities enforce fines for AI systems, and the European AI Office has direct supervisory authority over GPAI models.
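The "whichever is higher" formula is easy to make concrete. A hypothetical worked example (illustrative numbers, not an estimate of any actual exposure):

```ts
// "Whichever is higher": fine exposure is the max of the fixed cap
// and the turnover-based percentage.
function maxFine(capEur: number, turnoverShare: number, globalTurnoverEur: number): number {
  return Math.max(capEur, turnoverShare * globalTurnoverEur);
}

// Art. 5 violation by a company with €2bn global annual turnover:
maxFine(35_000_000, 0.07, 2_000_000_000); // €140,000,000 (7% exceeds the €35m cap)
```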
Does the EU AI Act apply to non-EU companies?
Yes. Like the GDPR, the AI Act has extraterritorial reach. It applies if the output of your AI system is used in the EU, regardless of where the provider or deployer is established. A US startup whose AI system affects EU persons, for example a credit scoring model used by a European bank or a hiring AI used to screen EU job applicants, falls within scope. Non-EU providers of high-risk systems must appoint an EU authorised representative (Art. 22) and may need to register in the EU AI database.