The world's first comprehensive AI regulation takes full effect in months, not years. We help regulated organizations classify their AI systems, close compliance gaps, and build the governance infrastructure the EU AI Act demands — before the enforcement deadline arrives.
Understanding the Regulation
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Adopted by the European Parliament in June 2024 and published in the Official Journal of the European Union in August 2024, this regulation establishes binding rules for the development, deployment, and use of AI systems across the European Union.
The EU AI Act takes a risk-based regulatory approach, classifying AI systems into tiers based on the level of risk they pose to health, safety, and fundamental rights. Systems deemed to pose unacceptable risk are prohibited outright. High-risk systems face stringent requirements for risk management, data governance, technical documentation, transparency, human oversight, and accuracy. Limited-risk systems have transparency obligations, and minimal-risk systems face no specific requirements.
The regulation applies broadly to providers (organizations that develop or place AI on the market), deployers (organizations that use AI systems under their authority), importers, and distributors. Critically, the Act has extraterritorial scope — it applies to any organization placing AI systems on the EU market or whose AI system's output is used within the EU, regardless of where the organization is headquartered. A U.S. company selling AI-enabled software to European customers is subject to the regulation.
Penalties for non-compliance are severe: up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for prohibited AI practices, up to €15 million or 3% for non-compliance with high-risk AI requirements, and up to €7.5 million or 1% for supplying incorrect information to authorities. These caps apply per infringement, and the Act requires penalties to be effective, proportionate, and dissuasive.
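The fine structure can be made concrete with a short worked example. The sketch below is illustrative only (the `penalty_cap` helper is ours, not part of the regulation, and the SME rule, where the lower of the two amounts applies, is ignored for simplicity):

```python
def penalty_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """EU AI Act fines are capped at the higher of a fixed amount or a
    percentage of total worldwide annual turnover (Article 99).
    Simplified: ignores the SME rule, where the lower amount applies."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Example: a provider with €2 billion worldwide annual turnover
turnover = 2_000_000_000
print(penalty_cap(turnover, 35_000_000, 0.07))  # prohibited practices cap
print(penalty_cap(turnover, 15_000_000, 0.03))  # high-risk non-compliance cap
print(penalty_cap(turnover, 7_500_000, 0.01))   # incorrect-information cap
```

For this hypothetical provider, the percentage-based figure exceeds the fixed amount in every tier, so the 7% cap works out to €140 million for a prohibited-practice violation.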
Enforcement Timeline
The EU AI Act uses a phased enforcement approach. Some provisions are already in effect. The most significant requirements for high-risk AI systems take effect August 2, 2026.
February 2, 2025: Prohibited AI practices provisions apply. Social scoring, manipulative AI, and certain biometric identification systems are now banned.
August 2, 2025: General-Purpose AI (GPAI) model rules apply. Providers of GPAI models must comply with transparency, documentation, and copyright requirements. GPAI models with systemic risk face additional obligations.
August 2, 2026: High-risk AI system requirements apply. This is the major compliance deadline. Providers and deployers of high-risk AI systems must demonstrate conformity with the requirements of Articles 8–15, including risk management, data governance, technical documentation, human oversight, and quality management systems.
August 2, 2027: Annex I high-risk AI systems — those embedded in products covered by EU safety legislation (medical devices, machinery, aviation, etc.) — must comply with all requirements.
Risk Classification
The EU AI Act classifies AI systems into four risk tiers. Your compliance obligations depend entirely on where your systems fall in this classification. Getting this right is the critical first step.
Unacceptable risk: AI systems that pose an unacceptable risk to people's safety, livelihoods, and rights are prohibited outright under Article 5. Banned practices include social scoring, manipulative or deceptive techniques that distort behavior, exploitation of vulnerabilities due to age or disability, and certain real-time remote biometric identification in publicly accessible spaces.
High risk: AI systems that significantly affect people's safety or fundamental rights. Annex III lists the covered use cases, including biometric identification, critical infrastructure, education, employment, access to essential services, law enforcement, migration, and the administration of justice. These systems face the full weight of EU AI Act requirements, including conformity assessment and CE marking.
Limited risk: AI systems with specific transparency obligations. Users must be informed they are interacting with AI. Examples include chatbots, emotion recognition systems, and AI-generated or manipulated content (deepfakes), which must be disclosed as such.
Minimal risk: the vast majority of AI systems, such as spam filters and AI-enabled video games, fall into this category and face no specific EU AI Act requirements. Voluntary codes of conduct are encouraged.
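The four tiers above can be sketched as a simple lookup during an AI inventory exercise. This is an illustrative triage aid only — the use-case keys and the `classify` helper are our own assumptions, and real classification requires legal analysis of Article 5, Annex I, and Annex III:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    LIMITED = "limited-risk (transparency obligations)"
    MINIMAL = "minimal-risk (no specific obligations)"

# Illustrative mapping of example use cases to tiers; not a legal determination.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default unknown systems to HIGH so they get reviewed, not waved through.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative choice: it forces every uncatalogued system through review rather than letting it silently escape the compliance program.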
High-Risk Requirements
Articles 8 through 15 of the EU AI Act establish the mandatory requirements for high-risk AI systems, and Articles 17 and 43 add quality management and conformity assessment obligations. These are the core compliance obligations that providers must satisfy before placing a system on the EU market.
Risk management system (Article 9): A continuous, iterative risk management process throughout the AI system's lifecycle. Must identify and analyze known and foreseeable risks, estimate and evaluate residual risks, and adopt risk mitigation measures.
Data and data governance (Article 10): Training, validation, and testing data must meet quality criteria. Requires data governance practices covering design choices, data collection, preparation, labeling, relevance, representativeness, and bias examination.
Technical documentation (Article 11): Comprehensive technical documentation demonstrating compliance with all requirements. Must be prepared before the system is placed on the market and kept up to date throughout its lifecycle.
Record-keeping (Article 12): Automatic logging of events throughout the system's lifetime. Logs must enable traceability of the AI system's functioning and must be retained for an appropriate period.
Transparency and provision of information (Article 13): Clear instructions of use for deployers. Must include information about the provider, system characteristics, performance, limitations, human oversight measures, and expected lifetime.
Human oversight (Article 14): Systems must be designed for effective human oversight. Natural persons assigned to oversight must be able to properly understand, monitor, and intervene in the system's operation.
Accuracy, robustness, and cybersecurity (Article 15): Systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. Must be resilient against errors, faults, inconsistencies, and attempts at manipulation by unauthorized parties.
Quality management system (Article 17): Providers must put in place a quality management system covering compliance strategy, design and development processes, testing, data management, risk management, and post-market monitoring.
Conformity assessment (Article 43): Formal process demonstrating compliance before market placement. Can be self-assessment or third-party notified body assessment depending on the system type. Results in an EU Declaration of Conformity and CE marking.
Our Services
EU AI Act compliance is not a single project — it is an organizational capability. We help you build the systems, processes, and documentation to achieve and maintain compliance over time.
Comprehensive catalog of every AI system your organization develops, deploys, or procures. Each system classified by EU AI Act risk tier with documented justification.
Detailed analysis of each AI system against EU AI Act Annex III categories. Determination of unacceptable, high-risk, limited, or minimal classification with supporting evidence.
Article-by-article gap assessment for each high-risk AI system against Articles 8–15 and Article 17 requirements. Prioritized remediation roadmap with timeline and resource estimates.
Development of Article 11 technical documentation packages for each high-risk AI system, covering system architecture, data practices, performance metrics, risk mitigation, and testing results.
Alignment of your existing quality management system with Article 17 requirements. Leveraging ISO 9001, ISO 13485, or ISO 42001 infrastructure to satisfy EU AI Act QMS obligations.
Preparation for conformity assessment procedures, whether self-assessment or notified body assessment. Includes EU Declaration of Conformity drafting and CE marking guidance.
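The gap-assessment service described above boils down to checking each high-risk system against a fixed requirements checklist. A minimal sketch, assuming a hypothetical evidence register keyed by article (the `gap_report` helper and file names are ours, for illustration only):

```python
# Checklist of high-risk obligations, keyed by article (titles abbreviated).
REQUIREMENTS = {
    "Article 9": "risk management system",
    "Article 10": "data and data governance",
    "Article 11": "technical documentation",
    "Article 12": "record-keeping",
    "Article 13": "transparency and instructions for use",
    "Article 14": "human oversight",
    "Article 15": "accuracy, robustness and cybersecurity",
    "Article 17": "quality management system",
}

def gap_report(evidence: dict) -> list:
    """Return the articles with no documented evidence, in checklist order."""
    return [art for art in REQUIREMENTS if not evidence.get(art)]

# Hypothetical evidence register for one high-risk system
evidence = {"Article 9": "risk-register-v3.xlsx", "Article 11": "techdoc-2025-06.pdf"}
print(gap_report(evidence))
```

Running this per system in the inventory yields the raw material for the prioritized remediation roadmap: each missing article becomes a work item with an owner, a deadline, and a resource estimate.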
Industry Impacts
The EU AI Act affects different industries in different ways. Understanding your industry-specific exposure is essential for scoping your compliance program.
AI-enabled medical devices face dual regulation under both the EU AI Act and the Medical Device Regulation (MDR). Software classified as a medical device (SaMD) that uses AI is automatically high-risk under the AI Act.
Financial services AI systems used for creditworthiness assessment and credit scoring are explicitly listed as high-risk in Annex III. Insurance pricing, fraud detection, and algorithmic trading also face scrutiny.
AI systems used in employment contexts are high-risk under the EU AI Act. This covers the entire employment lifecycle from recruitment to termination decisions.
AI systems embedded in safety components of machinery, equipment, or industrial processes fall under Annex I and may require notified body assessment. Quality control AI and predictive maintenance also warrant evaluation.
Complementary Frameworks
The EU AI Act tells you what you must do. ISO 42001 gives you the management system to do it consistently and sustainably. Organizations that implement ISO 42001 alongside EU AI Act compliance build governance infrastructure that goes beyond checking boxes — they build organizational capability that endures.
ISO 42001 Clause 8 risk assessment maps directly to EU AI Act Article 9 risk management system requirements.
An ISO 42001 management system can satisfy EU AI Act Article 17 quality management system requirements.
ISO 42001 document control and internal audit processes support EU AI Act Articles 11-12 technical documentation and logging.
ISO 42001 is expected to become a harmonized standard under the EU AI Act, creating a presumption of conformity.
FAQ
The August 2, 2026 deadline is not negotiable. Organizations deploying high-risk AI systems need time to inventory, classify, document, and remediate. Start with a free 30-minute consultation to assess your exposure and build a realistic compliance timeline.
Or email support@certify.consulting