EU AI Act Compliance: Risk Classification, Requirements, and Timeline
A practical guide to the EU AI Act (Regulation 2024/1689) - risk tiers, prohibited practices, high-risk obligations, GPAI rules, enforcement timeline, and fines.
What is the EU AI Act?
The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024 and applies in phases through August 2027. The regulation classifies AI systems by risk level and imposes obligations proportional to the potential for harm.
This is not a voluntary framework or a set of guidelines. It is binding EU law with fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. If your organization develops, deploys, imports, or distributes AI systems in the EU market, the AI Act applies to you.
Risk tier classification
The AI Act uses a four-tier risk classification system.
| Risk Tier | Examples | Obligations | Enforcement Date |
|---|---|---|---|
| Prohibited | Social scoring by governments, real-time biometric identification in public spaces (with exceptions), emotion inference in workplaces/schools, biometric categorization inferring sensitive attributes | Banned outright | 2 February 2025 |
| High-risk | AI in recruitment/HR decisions, credit scoring, insurance pricing, critical infrastructure management, educational grading, migration/asylum processing, law enforcement | Full conformity assessment, registration, monitoring, transparency | 2 August 2026 (Annex III systems) |
| Limited risk | Chatbots, deepfake generators, emotion recognition systems (outside prohibited scope) | Transparency obligations - users must be told they are interacting with AI | 2 August 2026 |
| Minimal risk | Spam filters, AI in video games, inventory management | No specific obligations (voluntary codes of conduct encouraged) | N/A |
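If you are building an internal AI inventory, it can help to encode these tiers as data. Below is a minimal sketch in Python; the enum values and the example mapping are illustrative bookkeeping, not terminology mandated by the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """The four EU AI Act risk tiers, from most to least regulated."""
    PROHIBITED = 1   # banned outright since 2 February 2025
    HIGH = 2         # full conformity assessment, registration, monitoring
    LIMITED = 3      # transparency obligations only
    MINIMAL = 4      # no specific obligations


# Illustrative mapping of example use cases to tiers, mirroring the table above.
EXAMPLE_TIERS = {
    "workplace emotion inference": RiskTier.PROHIBITED,
    "recruitment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}
```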
What is prohibited (already in effect)
Since 2 February 2025, the following AI practices are banned in the EU:
- Social scoring: AI systems that evaluate or classify people based on social behavior or personality characteristics, leading to detrimental treatment.
- Exploitative AI: Systems that exploit vulnerabilities of specific groups (children, disabled persons, economically vulnerable people).
- Untargeted facial recognition database scraping: Building or expanding facial recognition databases through untargeted scraping of images from the internet or CCTV.
- Emotion inference in workplaces and schools: AI systems that infer emotions of employees or students, with limited exceptions for safety or medical purposes.
- Biometric categorization inferring sensitive attributes: Systems that categorize individuals based on biometric data to infer race, political opinions, trade union membership, religious beliefs, or sexual orientation.
- Predictive policing based solely on profiling: AI systems that assess the risk of a person committing a criminal offense based solely on profiling or personality traits.
If your organization uses any of these AI applications, you must stop immediately. There is no grace period.
High-risk AI system obligations
If your AI system is classified as high-risk (Annex III, or safety components under Annex I), you must comply with these requirements by 2 August 2026 for Annex III systems, or by 2 August 2027 for Annex I safety components:
For providers (developers) of high-risk AI systems:
- Risk management system: Establish and maintain a risk management system throughout the AI system’s lifecycle. This must identify known and foreseeable risks, estimate and evaluate them, and adopt risk mitigation measures.
- Data governance: Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Bias detection and correction procedures are mandatory.
- Technical documentation: Maintain detailed technical documentation that demonstrates compliance before the system is placed on the market. Keep it up to date.
- Record-keeping: Design systems to automatically record events (logs) relevant to identifying risks and facilitating post-market monitoring (a minimal logging sketch follows this list).
- Transparency: Provide clear instructions for use. Deployers must understand the system’s capabilities, limitations, and intended purpose.
- Human oversight: Design systems to allow effective human oversight. Humans must be able to understand the system’s output and decide not to use it or to override it.
- Accuracy, robustness, and cybersecurity: Systems must achieve appropriate levels of accuracy and be resilient to errors, faults, and manipulation attempts by unauthorized third parties.
- Conformity assessment: Complete a conformity assessment (self-assessment for most categories; third-party assessment for biometric systems and critical infrastructure).
- EU database registration: Register the system in the EU database for high-risk AI systems before placing it on the market.
- Post-market monitoring: Establish a post-market monitoring system proportionate to the nature of the AI system.
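To make the record-keeping obligation concrete, here is a minimal sketch of structured event logging for an AI system. The Act requires logs sufficient to identify risks and support post-market monitoring, but it does not prescribe a schema; every field name below is an illustrative assumption.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.audit")
logging.basicConfig(level=logging.INFO)


def log_decision_event(system_id: str, input_ref: str, output: str,
                       confidence: float, operator: str) -> None:
    """Record one automatically logged event for a high-risk AI system.

    Field names are illustrative; the Act mandates logging capability,
    not a particular format.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,      # which registered system produced this
        "input_ref": input_ref,      # reference to input data, not the data itself
        "output": output,
        "confidence": confidence,
        "human_operator": operator,  # supports the human-oversight requirement
    }
    logger.info(json.dumps(event))


# Hypothetical example: a credit-scoring system logs one decision event.
log_decision_event("credit-scorer-v2", "application:8841",
                   "declined", 0.87, "analyst.jdoe")
```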
For deployers (users) of high-risk AI systems:
- Use the system according to the provider’s instructions.
- Ensure human oversight by competent, trained personnel.
- Monitor operation and report serious incidents to the provider and national authority.
- Conduct a fundamental rights impact assessment before deployment where the Act requires one (public bodies, private entities providing public services, and deployers of credit-scoring or insurance risk-pricing systems).
- Inform natural persons that they are subject to a high-risk AI system decision (transparency obligation).
General-purpose AI (GPAI) obligations
Since 2 August 2025, providers of General Purpose AI models (e.g., large language models) must comply with additional rules:
Standard GPAI obligations:
- Maintain and make available technical documentation.
- Provide information and documentation to downstream providers who integrate the GPAI into their systems.
- Establish a policy to respect EU copyright law.
- Publish a sufficiently detailed summary of training data content.
GPAI with systemic risk (models trained with >10^25 FLOPs):
All standard obligations apply, plus:
- Perform model evaluations including adversarial testing.
- Track and report serious incidents.
- Ensure adequate cybersecurity protections.
- Report energy consumption of the model.
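The 10^25 FLOP threshold can be sanity-checked with the common "compute ≈ 6 × parameters × training tokens" approximation for dense transformer training. That heuristic comes from scaling-law practice, not from the Act, so treat the result as a rough indicator only; the model size and token count below are hypothetical.

```python
# Rough training-compute estimate using the common 6*N*D approximation
# (about 6 FLOPs per parameter per training token for dense transformers).
# This heuristic is from scaling-law practice, not from the AI Act.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the Act's presumption


def estimated_training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens


# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")                     # ~6.30e+24
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD)  # False
```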
Enforcement timeline
| Date | What applies |
|---|---|
| 1 August 2024 | AI Act enters into force |
| 2 February 2025 | Prohibited AI practices banned; AI literacy obligations apply |
| 2 August 2025 | GPAI model obligations apply; governance and notified body rules apply |
| 2 August 2026 | High-risk AI system obligations apply (Annex III systems); limited-risk transparency obligations apply |
| 2 August 2027 | High-risk obligations for AI systems that are safety components of products (Annex I) |
Fines
The AI Act uses a tiered penalty structure:
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | EUR 35 million or 7% of global annual turnover (whichever is higher) |
| High-risk system non-compliance | EUR 15 million or 3% of global annual turnover (whichever is higher) |
| Incorrect information to authorities | EUR 7.5 million or 1.5% of global annual turnover (whichever is higher) |
For SMEs and startups, fines are capped at the lower of the fixed amount and the percentage of turnover, rather than the higher. Even the reduced fines are substantial for a small company.
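The "whichever is higher" rule (and the SME "whichever is lower" cap) is easy to get backwards, so here is the arithmetic spelled out. This is a minimal sketch of the stated upper bounds; actual fines are set by authorities case by case.

```python
def max_fine(fixed_eur: float, pct: float, turnover_eur: float,
             is_sme: bool = False) -> float:
    """Upper bound of a fine under the AI Act's tiered structure.

    Standard rule: the higher of the fixed amount and pct of turnover.
    SMEs and startups: capped at the lower of the two instead.
    """
    fixed, proportional = fixed_eur, pct * turnover_eur
    return min(fixed, proportional) if is_sme else max(fixed, proportional)


# Prohibited-practice tier (EUR 35M or 7%) for a EUR 2B turnover:
print(max_fine(35e6, 0.07, 2e9))                # 140,000,000.0
# Same tier for an SME with EUR 10M turnover:
print(max_fine(35e6, 0.07, 10e6, is_sme=True))  # 700,000.0
```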
How to start: a practical checklist
- AI inventory: Catalog every AI system your organization develops, deploys, or uses. Include third-party AI tools (yes, that ChatGPT Enterprise subscription counts as a deployment).
- Risk classification: For each AI system, determine its risk tier. Most business AI falls into minimal or limited risk. Focus your effort on any systems that touch high-risk categories (HR, credit, insurance, critical infrastructure, education); a starting sketch for this triage follows the list.
- Prohibited practices check: Verify immediately that none of your AI systems fall into the prohibited category. This is already enforceable.
- Gap analysis for high-risk systems: If you have high-risk AI systems, map your current practices against the ten provider obligations listed above. Identify gaps in documentation, testing, monitoring, and human oversight.
- GPAI assessment: If you develop or fine-tune foundation models, determine whether they qualify as GPAI and whether they carry systemic risk.
- Governance structure: Assign AI Act compliance responsibility. This cannot sit solely with legal or solely with engineering - it requires cross-functional coordination.
- AI literacy: Article 4 requires that staff involved in AI operation and deployment have sufficient AI literacy. This is already enforceable, so training programs should be in place.
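As a starting point for the inventory and classification steps, the sketch below shows one way to catalog systems and flag those touching high-risk categories for legal review. The category set is an illustrative subset of Annex III areas, not the full legal text; the Act's definitions and exemptions are what actually determine classification.

```python
from dataclasses import dataclass, field

# Illustrative subset of Annex III high-risk areas; the Act's full list
# and its exemptions govern the real classification decision.
HIGH_RISK_AREAS = {
    "recruitment", "credit_scoring", "insurance_pricing",
    "critical_infrastructure", "education", "migration", "law_enforcement",
}


@dataclass
class InventoryEntry:
    name: str
    vendor: str                       # internal team or third-party supplier
    areas: set[str] = field(default_factory=set)

    def needs_high_risk_review(self) -> bool:
        """Flag entries that touch a high-risk area for legal review."""
        return bool(self.areas & HIGH_RISK_AREAS)


inventory = [
    InventoryEntry("CV screener", "internal", {"recruitment"}),
    InventoryEntry("Helpdesk chatbot", "SaaS vendor", {"customer_support"}),
]

for entry in inventory:
    flag = "REVIEW" if entry.needs_high_risk_review() else "ok"
    print(f"{entry.name}: {flag}")
```

A triage like this only narrows the field; anything flagged still needs a proper legal classification against Annex III itself.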
AuditFront and the EU AI Act
AuditFront is developing a structured EU AI Act compliance assessment module, planned for Q2 2026. It will help organizations:
- Classify their AI systems by risk tier
- Assess compliance against high-risk obligations
- Identify gaps in documentation, testing, and governance
- Track remediation progress
This follows the same assessment-first methodology we use for ISO 27001, GDPR, NIS2, and SOC 2 - practical gap analysis before you spend on tools or consultants.
Create a free account to be notified when the EU AI Act module launches.
Organizations already using AuditFront for ISO 27001 or GDPR will find significant overlap. The AI Act’s requirements for risk management, documentation, and data governance build on principles already embedded in these frameworks. Work done on one framework reduces effort on the others.