Key Takeaway
Starting with a thorough risk classification of all AI systems in your portfolio lets you focus engineering resources on high-risk systems that require the most compliance work. This roadmap breaks down the EU AI Act into actionable engineering tasks across three phases: Assessment, Implementation, and Certification, with specific timelines tied to the Act's enforcement milestones.
Prerequisites
- An inventory of all AI systems deployed or planned across the organization
- Understanding of which AI systems are placed on the EU market or affect EU residents
- Access to legal counsel with EU AI Act expertise
- Familiarity with your organization's existing quality management and risk management systems
- Executive sponsorship and budget allocation for compliance activities
- Completed or in-progress AI governance framework (see: AI Governance Framework)
Enforcement Timeline
The EU AI Act entered into force on August 1, 2024, with a staggered enforcement timeline. Understanding this timeline is critical for prioritizing compliance work. Organizations that wait until obligations become enforceable will not have enough time to implement the required controls, documentation, and organizational changes.
1. February 2, 2025: Prohibited Practices
Prohibitions on unacceptable-risk AI systems take effect. This includes social scoring, real-time remote biometric identification in public spaces (with limited exceptions), exploitation of vulnerabilities, and subliminal manipulation. Audit your portfolio immediately for any systems that fall into these categories.
2. August 2, 2025: GPAI Model Obligations
Obligations for general-purpose AI model providers take effect. If your organization provides foundation models or general-purpose AI systems to others, you must comply with transparency requirements, copyright compliance, and (for systemic risk models) adversarial testing and incident reporting.
3. August 2, 2026: High-Risk System Obligations
The core of the Act takes effect. High-risk AI systems must comply with requirements for risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. This is the most resource-intensive compliance milestone.
4. August 2, 2027: Extended Deadlines
Extended compliance deadline for high-risk AI systems that are components of larger regulated products (e.g., medical devices, automotive, aviation). These systems may need to comply with both the AI Act and sector-specific regulations.
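The four milestones above can be encoded as a small lookup so planning tooling can answer "which obligations are enforceable by date X?" The track names below are an informal shorthand of my own, not terminology from the Act, and the dates are the ones listed in this timeline.

```python
from datetime import date

# Milestone dates from the enforcement timeline above; track names are
# informal labels, not official categories from the Act.
ENFORCEMENT_DEADLINES = {
    "prohibited": date(2025, 2, 2),
    "gpai": date(2025, 8, 2),
    "high_risk": date(2026, 8, 2),
    "high_risk_regulated_product": date(2027, 8, 2),
}

def enforceable_by(cutoff: date) -> list[str]:
    """Return the compliance tracks whose obligations apply on or before `cutoff`."""
    return sorted(track for track, d in ENFORCEMENT_DEADLINES.items() if d <= cutoff)

# Planning for the end of 2026: prohibited-practice, GPAI, and core
# high-risk obligations are all in force; only the extended regulated-product
# deadline is still ahead.
print(enforceable_by(date(2026, 12, 31)))  # → ['gpai', 'high_risk', 'prohibited']
```

A simple check like this is useful for flagging inventory entries whose compliance work has not started even though their track's deadline has passed.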