If you develop a high-risk AI system and place it on the EU market or put it into service under your own name or brand, you are a provider under the EU AI Act. And before your system can be sold or put into service in the EU, you must complete a conformity assessment.

Conformity assessment is not a form to fill in. It is a structured process of demonstrating that your AI system meets the Act's requirements across risk management, data governance, technical documentation, transparency, human oversight, and accuracy, and that you have the governance architecture to maintain compliance after deployment. Prior to placing a system on the market, providers must carry out the applicable conformity assessment, draw up a declaration of conformity, affix CE marking, and register in the EU database.

This guide walks through each step in order: who it applies to, which assessment route you must follow, what the process involves, and what happens after. For the broader obligations picture, including the Article 4 training requirement that applies regardless of risk tier, see what the EU AI Act means for your team's training in 2026.

Section 01

Are You a Provider? Establishing Scope

This is the question most organisations get wrong, and getting it wrong is expensive. The Act draws a sharp distinction between providers and deployers. The obligations are not comparable.

A provider is the entity that develops an AI system and places it on the EU market or puts it into service under their own name or brand. That includes software companies building AI-powered products, technology vendors integrating AI into enterprise solutions, and organisations that have substantially modified a third-party AI system and are now distributing it. If your company built it, branded it, and is offering it to others, you are almost certainly a provider.

A deployer uses an AI system in a professional context rather than building one. Deployers have their own obligations, but conformity assessment is a provider responsibility. If you are genuinely only a deployer, this guide still matters to you: the conformity assessment your provider completed, or failed to complete, is part of what you are inheriting when you put that system to work.

Two situations in particular deserve attention. First, providers without establishment in the EU must appoint an EU authorised representative if their system's output is used within the EU. Market access depends on it; this is not a procedural nicety. Second, if an AI system is substantially modified after initial deployment, it must undergo a new conformity assessment procedure, regardless of whether the modified system is being redistributed or continues to be used by the current deployer. Planned changes to learning behaviour that are documented at initial assessment do not trigger re-assessment. Unplanned substantial changes do.

The Practical Test

Ask yourself: if a regulator came in tomorrow, whose name is on the system? Whose documentation covers it? Who drew up the technical specification? That is your provider. If the answer is you, read on. If the answer is a vendor you purchased from, the question becomes: have you seen their conformity assessment?

Section 02

Step One: Classify Your System Correctly

Before any conformity assessment work begins, you need to establish whether your system is actually high-risk. This sounds obvious. In practice, it is one of the most common compliance failures, and it tends to go in one direction: organisations assume their system is lower-risk than it is.

More than 50% of organisations lack systematic inventories of the AI systems they currently have in production (Secure Privacy, EU AI Act 2026 Compliance Analysis). You cannot classify what you have not documented.

Two separate annexes determine high-risk classification: one based on regulated product categories, one based on high-stakes deployment contexts (EU AI Act, Annexes I and III). Your system may qualify under either, or both.

Annex I covers AI systems that are safety components of products already regulated under EU harmonisation legislation: medical devices, machinery, toys, pressure equipment, and others. If your AI is embedded in one of these product categories, Annex I applies regardless of what the AI specifically does.

Annex III covers AI systems used in specific high-stakes deployment contexts regardless of the product they sit within: biometric identification, critical infrastructure, education, employment, essential services including credit scoring and insurance, law enforcement, migration, and the administration of justice. An AI recruitment tool sitting inside a standard HR software platform is still Annex III. The wrapper does not change the classification.

The classification decision matters enormously because it determines which conformity assessment route you must follow. Getting it wrong (classifying a high-risk system as minimal risk) exposes you to the full penalty structure with no compliance documentation to support a defence. Understanding what the high-risk requirements actually demand before you begin classification work reduces the chance of misreading where your system sits.
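To make the screening exercise concrete, here is a minimal sketch of how an AI inventory record might be checked against both annexes. It is illustrative only: the category lists, field names, and helper function are our assumptions, not the Act's text, and the authoritative test is always the annexes themselves.

```python
from dataclasses import dataclass

# Illustrative, non-exhaustive category lists. The authoritative test is
# the text of Annex I and Annex III themselves.
ANNEX_I_PRODUCT_CATEGORIES = {
    "medical_device", "machinery", "toys", "pressure_equipment",
}
ANNEX_III_CONTEXTS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration", "administration_of_justice",
}

@dataclass
class AISystemRecord:
    name: str
    embedded_product_category: str | None  # e.g. "medical_device", or None
    deployment_context: str                # e.g. "employment"

def classify(record: AISystemRecord) -> list[str]:
    """Return the grounds on which a system may be high-risk.

    A system can qualify under either annex, or both: the wrapper
    product does not change an Annex III deployment context.
    """
    grounds = []
    if record.embedded_product_category in ANNEX_I_PRODUCT_CATEGORIES:
        grounds.append("Annex I (safety component of regulated product)")
    if record.deployment_context in ANNEX_III_CONTEXTS:
        grounds.append("Annex III (high-stakes deployment context)")
    return grounds

# An AI recruitment tool inside a standard HR platform is still Annex III.
print(classify(AISystemRecord("cv-screener", None, "employment")))
```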

Section 03

Step Two: Choose Your Conformity Assessment Route

Here is something that surprises most people when they first look at this: the majority of providers do not need a notified body. The Act provides two conformity assessment routes under Article 43, and which route you follow is determined by your system's classification, not by your preference or the technical complexity of the system.

Route A: Internal control (Annex VI)
The route for most providers. Applies to points 2–8 of Annex III: credit scoring, recruitment screening, education systems, law enforcement tools, insurance, social benefits, and the administration of justice.

No notified body is required. You assess your own system against the Act's requirements, document everything, declare conformity, affix CE marking, and register. The assessment must be rigorous and defensible to a regulator, but you conduct it.

Route B: Third-party assessment (Annex VII)
Reserved for specific categories: Annex I systems where existing product legislation requires third-party assessment, and AI systems intended for remote biometric identification.

Upon completion, the notified body issues an EU Technical Documentation Assessment Certificate valid for four years, renewable for up to four further years.

For the biometric systems under point 1 of Annex III, where a provider cannot apply harmonised standards in full, or where those standards do not yet exist, the third-party conformity assessment procedure in Annex VII becomes mandatory. This matters in 2026 because many harmonised standards are still in draft. If you cannot demonstrate compliance against a finalised standard, your route options narrow.

For SMEs subject to third-party assessment, the Act mandates that fees be set in proportion to size and market share. If that is you: document your size classification clearly and engage notified bodies early. Capacity is limited and timelines are tightening.

A Common Misconception

Many providers assume that because their system is technically sophisticated or genuinely novel, they need a notified body. That is not how the Act works. Route is determined by classification category, not complexity. A highly sophisticated credit-scoring model uses internal self-assessment under Route A. A relatively simple biometric identification component triggers Route B. Classification drives process, not the other way around.
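Stated as logic rather than prose, route selection might look like the sketch below. It deliberately simplifies Article 43 (for instance, it ignores the harmonised-standards condition for biometric systems discussed above), and the flag names are our assumptions.

```python
def assessment_route(annex_i: bool, annex_iii: bool,
                     remote_biometric: bool = False,
                     product_law_requires_third_party: bool = False) -> str:
    """Simplified reading of Article 43: classification category,
    not technical complexity, determines the assessment route."""
    if remote_biometric:
        return "Route B: third-party assessment (Annex VII)"
    if annex_i and product_law_requires_third_party:
        return "Route B: third-party assessment (Annex VII)"
    if annex_iii:
        return "Route A: internal control (Annex VI)"
    return "Outside this simplified high-risk screen"

# A highly sophisticated credit-scoring model: still Route A.
print(assessment_route(annex_i=False, annex_iii=True))
# A relatively simple remote biometric component: Route B.
print(assessment_route(annex_i=False, annex_iii=True, remote_biometric=True))
```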

Section 04

Step Three: Build the Technical Documentation

Technical documentation is the evidentiary backbone of conformity assessment. It must exist before the assessment begins, it must cover specific content prescribed by the Act, and it must be maintained and updated throughout the system's lifecycle. Think of it the way you think about GDPR: the audit trail is the compliance. Regulators assessing your conformity position will look at the documentation first, and if it is not there, nothing else you say will matter much.

One number worth internalising before you start: technical documentation must be retained for 10 years after an AI system is placed on the market. This is not a rolling archive. It is a ten-year obligation that starts from market placement and extends well beyond any single product cycle. Build your documentation architecture with that in mind from day one.
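As a small illustration of how that obligation outlives product cycles, a documentation registry might compute the retention horizon directly from the market-placement date. A minimal sketch, with assumed field names:

```python
from datetime import date

def retention_until(placed_on_market: date) -> date:
    """Documentation must be kept for 10 years from market placement.

    (A 29 February placement date would need special handling,
    omitted here for brevity.)
    """
    return placed_on_market.replace(year=placed_on_market.year + 10)

# A system placed on the market in August 2026 carries a documentation
# obligation until August 2036, regardless of when the product retires.
print(retention_until(date(2026, 8, 2)))  # 2036-08-02
```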

Providers must ensure compliance with Articles 8 to 15 throughout the system's lifecycle, including a documented risk management system, robust data governance, detailed technical documentation, automatic logging, human oversight protocols, and safeguards for accuracy, robustness, and cybersecurity. The documentation must cover each of the following:

Article 9: Risk management system
Evidence of a continuous, lifecycle-spanning risk management process with documented outputs. Not a point-in-time assessment: an ongoing programme. The question a regulator asks is not whether you ran a risk assessment at launch. It is whether you have been running one since.
Article 10: Data governance
Documentation of training, validation, and test datasets. Evidence that data is relevant, representative, and subject to governance practices that address bias and accuracy risks. If your training data reflects historical human decisions, that is where the discrimination risk lives, and where Article 10 expects you to have looked. Understanding how bias enters AI systems is the foundation for documenting how you have addressed it.
Article 11 and Annex IV: System design and performance
A comprehensive description of the system's purpose, design, architecture, logic, and performance characteristics. Must include known limitations and circumstances in which the system may fail or produce unreliable outputs. The limitations section is consistently underdeveloped in practice. It should not be; it is often the first thing a regulator reads.
Article 12: Logging capabilities
Evidence that the system automatically logs inputs, outputs, and decision points sufficient to trace what happened after deployment. This is the evidentiary foundation for any post-incident investigation. If you cannot reconstruct what your system did and why, you cannot investigate, and you cannot defend yourself when something goes wrong. A minimal logging sketch follows this list.
Article 13: Transparency for deployers
Clear documentation of the system's intended use, capabilities, limitations, and circumstances requiring human oversight, written for deployers, not engineers. If the person responsible for deploying your system cannot understand what it does and when to override it, your Article 13 documentation has not done its job.
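What "sufficient to trace what happened" means will vary by system, but an Article 12-style decision log usually has roughly the shape sketched below. The schema and field names are assumptions, not prescribed by the Act; the obligation is that inputs, outputs, and decision points can be reconstructed after the fact.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Append-only decision log: one structured record per automated decision,
# so a post-incident investigation can reconstruct what happened and why.
logger = logging.getLogger("decision_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("decisions.log"))

def log_decision(model_version: str, inputs: dict, output: dict,
                 overridden_by_human: bool) -> str:
    """Write one traceable decision record; return its trace id."""
    trace_id = str(uuid.uuid4())
    logger.info(json.dumps({
        "trace_id": trace_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,          # or a reference/hash if data is sensitive
        "output": output,
        "human_override": overridden_by_human,
    }))
    return trace_id

# Hypothetical usage for an illustrative credit-scoring system.
log_decision("credit-scorer-2.3.1",
             {"applicant_id": "A-1042", "features_hash": "9f2c..."},
             {"score": 0.34, "decision": "refer_to_human"},
             overridden_by_human=False)
```
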
Section 05

Step Four: Conduct the Assessment

With documentation in place, the assessment itself proceeds differently depending on your route. The underlying standard is the same in both cases: can you demonstrate that your system meets the Act's requirements? The difference is who does the demonstrating, and to whom.

Route A: Internal control (Annex VI)
The provider conducts a structured self-assessment against the requirements in Chapter III, Section 2 of the Act. This involves verifying that the QMS complies with Article 17 and examining technical documentation for compliance with the relevant essential requirements. The provider also verifies that the design and development process and post-market monitoring are consistent with the technical documentation. No external body is involved, but the assessment must be documented and defensible to a regulator. Think of it as an audit you conduct on yourself, with the expectation that a regulator could replicate your findings.
Route B: Third-party assessment (Annex VII)
The notified body assesses both the quality management system and the technical documentation. Upon successful completion, the notified body issues an EU Technical Documentation Assessment Certificate valid for four years, renewable for up to four further years. If the notified body finds non-conformity, the provider must take corrective action within the deadline set, or withdraw the system from the market. Providers can appeal against a notified body's determination under Article 45. That right is worth knowing before you are in a position where you need it.

In both routes, the assessment outputs must meet a standard of evidentiary quality: documentation, reproducibility, and traceability, an unbroken chain from requirement to test to evidence. The question is not "did we evaluate?" It is "can we prove we evaluated, and can a regulator reproduce our findings?" Those are meaningfully different questions, and the second is the one that matters in enforcement.
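One way to make that unbroken chain operational is a traceability matrix that can be checked mechanically. A minimal sketch, with an assumed record structure:

```python
# Each entry ties a requirement to the test that exercised it and to the
# evidence artefact a regulator could inspect to reproduce the finding.
traceability_matrix = [
    {"requirement": "Article 9: risk management",
     "test": "quarterly-risk-review-2026Q1",
     "evidence": "reports/risk-review-2026Q1.pdf"},
    {"requirement": "Article 12: automatic logging",
     "test": "log-replay-integration-test",
     "evidence": None},  # gap: evaluated, but not provable
]

gaps = [row["requirement"] for row in traceability_matrix
        if not (row["test"] and row["evidence"])]
if gaps:
    print("Cannot demonstrate conformity for:", gaps)
```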

Section 06

Step Five: Declaration, CE Marking, and Registration

Once the conformity assessment is complete, three actions follow in sequence. All three must happen before the system can be placed on the EU market, and none of them are optional or purely administrative.

1. EU Declaration of Conformity (Article 47)
A written declaration that the system meets the requirements of the Act, describing the conformity assessment procedure followed. Must be kept available to authorities for ten years. This document is your primary compliance statement. If a regulator asks for evidence that you completed the process, this is what you hand them first.
2. CE Marking (Article 48)
The CE marking must be clearly visible and permanent, or affixed to the packaging or documentation if physical affixation is not possible. For AI systems provided digitally, a digital CE marking must be easily accessible. Where a notified body was involved, the marking must include their identification number. Getting the CE marking wrong, or missing it entirely, is not a minor procedural issue. It is a visible, auditable compliance failure.
3. EU Database Registration (Article 49)
Registration in the EU database is a precondition for market access, not an afterthought. For systems within scope of the 2 August 2026 application date, conformity assessment, CE marking, and registration should all be complete by that date. Build registration into your project timeline from the start, not as a final step after everything else is done.
Section 07

Ongoing Obligations: What Happens After Deployment

This is where a lot of providers mentally clock out, and it is a mistake. Conformity assessment is not a one-time event. The Act imposes ongoing obligations that continue for the lifetime of the system on the market, and failing to meet them after deployment is as much a compliance failure as not completing the assessment in the first place.

1. Post-market monitoring (Article 72)
Providers must implement a system that actively tracks real-world performance, identifies failures, and feeds findings back into the risk management process. This is not passive. It requires systematic data collection and documented review processes. If your AI system is behaving differently in production than it did in testing (and most do, eventually), Article 72 is what catches it before it becomes a regulatory problem. A minimal monitoring sketch follows this list.
2. Serious incident reporting (Article 73)
Providers must immediately inform the competent authority of any serious incident involving their AI system. Early self-reporting is a meaningful mitigating factor in enforcement. Discovering a problem and choosing not to report it is worse, legally and reputationally, than the problem itself.
3. Substantial modification triggers re-assessment
High-risk AI systems must undergo a new conformity assessment in the event of a substantial modification. Planned changes to learning behaviour documented at initial assessment do not trigger re-assessment. Unplanned significant changes do. The practical implication: document your planned evolution at the point of initial assessment, not retrospectively.
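In engineering terms, post-market monitoring reduces to comparing production behaviour against the baseline documented at assessment and escalating when they diverge. A deliberately simple sketch, with metric names and thresholds as assumptions:

```python
# Compare a production metric against the baseline recorded in the
# technical documentation; a sustained gap feeds back into the risk
# management system and may indicate a substantial modification.
ASSESSED_BASELINE = {"accuracy": 0.91}   # from the technical documentation
ALERT_THRESHOLD = 0.05                   # assumed tolerance, set per system

def review_production_metrics(observed: dict) -> list[str]:
    findings = []
    for metric, baseline in ASSESSED_BASELINE.items():
        drift = baseline - observed.get(metric, 0.0)
        if drift > ALERT_THRESHOLD:
            findings.append(
                f"{metric} degraded {drift:.2%} vs assessed baseline: "
                "log in risk management system and review for re-assessment"
            )
    return findings

print(review_production_metrics({"accuracy": 0.83}))
```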

The ongoing obligations are where the connection to the high-risk AI governance requirements becomes most visible in practice. Post-market monitoring, serious incident reporting, and re-assessment on modification are not separate from your QMS. They are what your QMS exists to support.

Section 08

The Harmonised Standards Question

Here is the honest position for 2026: the harmonised standards the Act relies on are not finalised. Significant effort will be required to have standards ready for use by August 2026, when most of the Act's provisions come into effect, and many stakeholders are closely watching whether that timeline holds.

The draft QMS standard, prEN 18286, entered public enquiry in October 2025 and is expected to be finalised by end of 2026. Once cited in the Official Journal of the EU, conformity with it grants a presumption of conformity with Article 17. That is a meaningful legal benefit; it is not available yet, and waiting for it is not a compliance strategy.

In the absence of harmonised standards, providers can use common specifications issued by the European Commission, or demonstrate compliance through alternative means. Either way, you must be prepared to justify your approach to a national market surveillance authority. The practical advice from compliance specialists is consistent: do not wait for harmonised standards before beginning conformity assessment work. Start with the Act's requirements directly. The standards, when they arrive, will confirm your approach or prompt refinement. They will not excuse having done nothing in the meantime.

The Timing Reality

August 2026 is the enforcement start date, not the work start date. Organisations that begin classification, documentation, and assessment work now, against the Act's requirements directly, will be in a substantially stronger position than those waiting for standards to land. The standards are a shortcut to demonstrating conformity. They are not a prerequisite for achieving it.

Section 09

Compliance Checklist for Providers

For compliance leads and product teams tracking readiness against the full conformity assessment process.

EU AI Act Conformity Assessment — Provider Readiness Checklist
We have classified all AI systems against Annex I and Annex III categories, including embedded components and substantially modified third-party systems.
We have determined which conformity assessment route applies to each system and can document the basis for that determination.
We have appointed an EU authorised representative if we are based outside the EU and our systems are used within the EU.
Technical documentation covering Articles 9 to 15 is complete and up to date, including a limitations section that accurately reflects when the system may fail or produce unreliable outputs.
Our QMS meets the requirements of Article 17 and is maintained as an ongoing operational discipline, not filed and forgotten.
We have completed the conformity assessment and drawn up the EU Declaration of Conformity, kept available to authorities for ten years.
CE marking has been affixed and is visible, permanent, and includes the notified body's identification number where applicable.
The system is registered in the EU database before being placed on the market or put into service.
Post-market monitoring is operational and documented, with a defined process for feeding findings back into the risk management system.
We have a clear process for identifying and reporting serious incidents to the competent authority without delay.
Frequently Asked Questions
EU AI Act Conformity Assessment — Common Questions
Answers to the questions compliance leads and product teams most commonly ask about the provider conformity assessment process.
Who counts as a provider under the EU AI Act?
A provider is any entity that develops an AI system and places it on the EU market or puts it into service under their own name or brand. This includes software companies building AI-powered products, technology vendors integrating AI into enterprise solutions, and organisations that have substantially modified a third-party AI system and are distributing it. Providers without establishment in the EU must appoint an EU authorised representative if their system's output is used within the EU. Market access depends on it.
What are the two conformity assessment routes under the EU AI Act?
The Act provides two routes under Article 43. Route A is internal control under Annex VI, covering the majority of Annex III high-risk systems. No notified body is required; the provider self-assesses, documents, declares conformity, and registers. Route B requires a third-party notified body and applies to Annex I systems where existing product legislation requires third-party assessment, and to AI systems intended for remote biometric identification. Most providers will follow Route A. Classification drives the route, not the complexity of the system.
What must technical documentation include for EU AI Act conformity assessment?
Technical documentation must cover: a continuous risk management system (Article 9); documented data governance for training, validation, and test data (Article 10); a comprehensive system design and performance description including known limitations (Article 11 and Annex IV); evidence of automatic logging (Article 12); and clear transparency information for deployers (Article 13). Documentation must be retained for 10 years after the system is placed on the market.
What happens after a conformity assessment is complete?
Three actions must follow in sequence before market placement: draw up an EU Declaration of Conformity (Article 47), affix the CE marking (Article 48), and register in the EU database (Article 49). After deployment, providers must maintain post-market monitoring, report serious incidents to the competent authority without delay, take corrective action if the system falls out of compliance, and conduct a new conformity assessment if the system is substantially modified. Conformity assessment is not a one-time event.
Do I need a notified body for EU AI Act conformity assessment?
Most providers do not. The majority of high-risk AI systems under Annex III use the internal control route under Annex VI, requiring no notified body. A notified body is required for Annex I systems where existing product legislation mandates third-party assessment, and for AI systems intended for remote biometric identification. For SMEs subject to third-party assessment, the Act mandates fees proportional to size and market share. Document your size classification and engage notified bodies early.
The documentation conformity assessment demands is also the foundation of trustworthy AI.

Savia's GRC learning paths help the people inside your organisation: compliance leads, product teams, and business functions who need to understand what these obligations mean in practice and build the capability to meet them.