Most conversations about the EU AI Act focus on the technology — prohibited practices, high-risk systems, conformity assessments. What receives far less attention is the obligation that applies to almost every organisation right now, regardless of sector, size, or how sophisticated their AI use actually is.

A 2026 readiness analysis by Vision Compliance found 78% of enterprises are unprepared for their EU AI Act obligations — and the most commonly missed obligation is not a technical one. It is Article 4: the legal requirement to ensure that all staff working with AI systems have a sufficient level of AI literacy. Article 4 has been in force since 2 February 2025. The majority of companies in Europe do not even know it exists.

This article explains what Article 4 requires, what the penalty structure looks like, who it applies to, and what your organisation needs to do before national enforcement begins in August 2026.

Section 01

The EU AI Act — Timeline and Scope

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024 and applies a risk-based approach: the higher the risk an AI system poses, the stricter the obligations on the organisations deploying it.

Implementation is staggered across four phases. The Article 4 literacy obligation sits at the earliest point — it is not a future requirement. It is already law.

2 Feb 2025
Article 4 AI literacy obligation in force
Prohibited AI practices also came into effect. All providers and deployers of AI systems became legally required to ensure sufficient AI literacy among their staff.
Already in force
2 Aug 2025
General-purpose AI model obligations
Requirements for providers of general-purpose AI models, including transparency and capability evaluations, became applicable.
Already in force
2 Aug 2026
National enforcement of Article 4 begins
Most remaining obligations become enforceable, including transparency duties and main high-risk system requirements. National authorities begin supervising and enforcing the AI literacy obligation.
Enforcement begins
Dec 2027
High-risk AI in regulated products
Rules for high-risk AI embedded in regulated products (medical devices, machinery, vehicles) take effect. In March 2026 the EU Council agreed to push this category back from its original August 2026 date.
Delayed from Aug 2026

The Act applies to any organisation operating in or serving the European market — not just EU-headquartered companies. McKinsey reports that 88% of organisations already use AI in at least one business function — meaning Article 4's reach is practically global for any organisation with EU customers, employees, or operations.

Section 02

What Article 4 Actually Requires

Article 4 — Official Text
"Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in."

Three things define the practical scope of this obligation.

Who it applies to. Article 4 affects any organisation that uses AI systems, regardless of size or sector: law firms using AI-powered document review, hospitals with diagnostic support systems, HR departments filtering CVs with algorithms, marketing teams generating content with generative AI. If anyone in your company uses ChatGPT, Copilot, or Gemini, Article 4 applies. Compliance audits typically surface between 5 and 12 undocumented AI tools per company, most installed by employees without IT or management awareness.

What "sufficient" means. The European Commission has clarified that AI literacy means skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems and gain awareness about the opportunities and risks of AI and possible harm it can cause. It is not about turning everyone into a machine learning engineer. It is about ensuring people who work with AI understand it well enough to use it responsibly.

Training must be proportional to role. Generic awareness training for all employees is unlikely to be sufficient on its own. The regulation explicitly requires training measures to be adapted to each person's level, experience, and the context of the AI systems they use. A customer service agent using an AI response tool needs different training from a compliance officer overseeing an AI-assisted risk assessment. For a breakdown of what different roles need, our guide to what AI training employees need maps this by function.
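
As a rough illustration of role-proportional design, the layering can be thought of as a shared baseline plus role-specific modules. The roles and module names below are hypothetical examples, not categories taken from the regulation:

```python
# Baseline modules apply to everyone; higher-risk roles stack an extra layer.
BASELINE = ["what AI is", "data handling limits", "when to escalate to a human"]

# Illustrative role-specific layers (not prescribed by the Act).
ROLE_LAYERS = {
    "customer_service": ["reviewing AI-drafted responses"],
    "hr": ["bias in AI-assisted recruitment", "candidate data protection"],
    "compliance": ["overseeing AI-assisted risk assessments"],
}

def training_plan(role: str) -> list[str]:
    """Baseline modules plus any role-specific layer for higher-risk functions."""
    return BASELINE + ROLE_LAYERS.get(role, [])

print(training_plan("hr"))
```

The point of the structure is defensibility: every employee gets the same documented foundation, and the extra layer maps directly to the context in which each role uses AI, which is what the proportionality language in Article 4 asks for.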

Section 03

The Penalty Structure — What Non-Compliance Actually Costs

The EU AI Act establishes three tiers of fines under Article 99, calibrated to the severity of the violation.

Tier 1
€35M
or 7% of global annual turnover
Prohibited AI practices — whichever figure is higher
Tier 2
€15M
or 3% of global annual turnover
High-risk AI system violations including risk management, data governance, transparency
Tier 3
€7.5M
or 1% of global annual turnover
Providing misleading information to authorities

To put these figures in context: 7% of global revenue would cost Meta roughly $11.5 billion, Google $24.5 billion, and Microsoft $17 billion based on 2024 financials. Even at the lowest tier, a small or medium-sized company with €50 million in revenue faces up to €500,000 at the 1% rate.
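
As a rough way to see how each tier combines a fixed cap with a turnover percentage, here is a minimal sketch. The SME rule, which takes the lower of the two figures, follows Article 99(6); the function name and structure are illustrative:

```python
# A minimal sketch of the Article 99 ceiling logic described above.
# Caps and percentages are the tier figures from the Act; the SME rule
# (lower of the two figures) follows Article 99(6).

def max_fine(cap_eur: float, pct: float, turnover_eur: float, sme: bool = False) -> float:
    """Maximum possible fine for one tier: the fixed cap or the percentage
    of global annual turnover. Large organisations face whichever figure
    is higher; SMEs face whichever is lower."""
    pct_amount = pct * turnover_eur
    return min(cap_eur, pct_amount) if sme else max(cap_eur, pct_amount)

# Tier 3 (EUR 7.5M cap, 1%) for a company with EUR 50M turnover:
print(max_fine(7_500_000, 0.01, 50_000_000))            # cap applies for a large firm
print(max_fine(7_500_000, 0.01, 50_000_000, sme=True))  # 1% applies for an SME
```

The asymmetry is worth noting: for a large organisation the fixed cap is a floor on exposure, not a ceiling, because the turnover percentage takes over as soon as revenue is high enough.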

⚠ How Article 4 Fits Into the Penalty Picture

No direct fine applies for violating Article 4 alone. However, from August 2025 organisations may face civil liability if the use of AI systems by inadequately trained staff causes harm to consumers, business partners, or other third parties.

More significantly, Article 4 breaches will be taken into account by regulators when considering penalties for other violations. Inadequate training will not trigger a standalone fine — but it makes every other violation more expensive and substantially weakens any compliance defence. Beyond fines, the Act allows employees, rejected job candidates, or consumer associations to file complaints with the national authority. A regulatory investigation is reputational risk as much as financial risk.

Section 04

What the High-Risk Delay Means — and What It Doesn't

The March 2026 announcement that the EU Council agreed to delay certain high-risk AI system requirements was widely reported as providing businesses with relief. The nuance is important.

The delay responds to two problems: the European Commission missed its February 2026 deadline to publish technical guidance on high-risk AI classification, and only 8 of the 27 EU member states had designated their national contact points. In that context, applying full high-risk obligations without a functioning supervisory infrastructure would have created compliance requirements with no clear mechanism for demonstrating conformity.

The Article 4 literacy obligation is unaffected by this delay. It was in force from February 2025. August 2026 is when national authorities begin enforcing it — which means the window to demonstrate good-faith compliance is closing, not extending.

Good Faith Matters

Demonstrating that you have been working toward compliance — even if not yet fully compliant — is a significant mitigating factor in penalty calculations. Documented good-faith effort matters. Inaction does not. The organisations that will face the most exposure are not those that tried and fell short. They are those that never started.

Section 05

The Scale of the Compliance Gap

The readiness data makes for uncomfortable reading across every dimension of Article 4 compliance.

78%
Vision Compliance — 2026 EU AI Act Readiness Report
of enterprises across eight industries are unprepared for their EU AI Act obligations, with Article 4 the most commonly missed requirement.
50%+
Secure Privacy — EU AI Act 2026 Compliance Analysis
of organisations lack systematic inventories of AI systems currently in production or development. Without knowing what AI exists, risk classification and compliance planning are impossible.
12%
Pew Research via HR Dive
of workers have received training specifically on AI — even though half underwent some form of general training in the past year. Awareness and AI-specific literacy are not the same thing.
1 in 3
BCG — AI at Work: Momentum Builds But Gaps Remain, 2025
employees say they have been properly trained in AI — meaning two thirds have not, across organisations that have already adopted AI tools.

These figures reflect both the scale of what Article 4 is asking organisations to address and how far most are from meeting that obligation in any way that would withstand regulatory scrutiny. Only 32% of employees have received formal AI training at work — which means that for most organisations, the compliance gap is not a gap at the margins. It is the baseline condition.

Section 06

What This Means Practically for Your Organisation

Translating Article 4 into what an L&D, HR, or compliance lead actually needs to do before August 2026 — in sequence.

1
Audit what AI your organisation is actually using
Before designing any training, you need to know which tools employees are interacting with — including the unsanctioned ones. Shadow AI is a compliance gap as much as a security one. Build an inventory covering approved tools, embedded AI features in existing software (Microsoft 365 Copilot, Google Workspace AI), and tools employees have adopted independently. The average compliance audit finds between 5 and 12 undocumented AI tools per company. You cannot classify risk or demonstrate compliance for tools you do not know exist.
2
Design training that is proportional to role and risk
The regulation does not require identical training for everyone. A baseline module covering all employees — what AI is, what data cannot enter public AI tools, when to seek human oversight — establishes the foundation. Employees in higher-risk functions need a second layer: HR teams using AI in recruitment, finance teams using AI in credit or forecasting, customer service teams using AI-assisted responses. Only 32% of employees have received formal AI training at work — role-specific design significantly increases both the quality and the defensibility of what you deliver. See how to build an AI literacy programme for your team in 2026 for the full design framework.
3
Document everything
Think of compliance documentation the way you think about GDPR: the audit trail is the compliance. Training records, role-specific programme evidence, and documentation of who received what training and when are your primary defence in any regulatory investigation. Without them, good-faith compliance cannot be demonstrated — even if the training actually happened. For the governance layer that training sits within, documenting and communicating your AI safeguards covers the documentation and accountability structures that connect to Article 4.
4
Build a review mechanism, not a one-time event
AI tools evolve faster than any fixed curriculum. Training designed as a one-time rollout will be outdated before enforcement begins. Build a process for updating content as tool usage changes and as regulatory guidance develops through the second half of 2026. The EU AI Act's obligations will continue to evolve — your training programme needs to evolve with them. Agile principles for effective AI training covers how to design for iteration rather than completion.
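
The audit and documentation steps above come down to keeping two kinds of records: an inventory of tools and a log of who completed which training, and when. A minimal, hypothetical sketch of that structure follows; all field names are illustrative and not prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AITool:
    """One entry in the AI tool inventory (step 1)."""
    name: str
    vendor: str
    sanctioned: bool          # approved by IT/management, or shadow AI
    business_function: str    # e.g. "HR", "customer service"

@dataclass
class TrainingRecord:
    """Who received what training, and when (step 3)."""
    employee_id: str
    module: str               # e.g. "baseline literacy", "HR high-risk layer"
    completed_on: date

inventory = [
    AITool("ChatGPT", "OpenAI", sanctioned=False, business_function="marketing"),
    AITool("Copilot", "Microsoft", sanctioned=True, business_function="all"),
]

records = [
    TrainingRecord("E-1042", "baseline literacy", date(2026, 3, 14)),
]

# Shadow AI the audit surfaced but no one approved:
shadow = [t.name for t in inventory if not t.sanctioned]
print(shadow)
```

Whether this lives in a spreadsheet, an LMS export, or a database matters far less than that it exists, is dated, and can be produced on request. That is the audit trail the documentation step refers to.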
Section 07

The Broader Context — What Comes Next

Article 4 is the floor, not the ceiling. As August 2026 arrives with transparency obligations and high-risk system requirements — even with some categories delayed to December 2027 — organisations that have not covered the literacy baseline will face a harder compliance problem for everything that follows.

Article 14 and Article 4 are structurally linked. Article 14 requires that people using high-risk AI systems have the skills, knowledge, and authority to understand, monitor, and override those systems. You cannot demonstrate meaningful human oversight with an undertrained workforce — failing one obligation makes demonstrating the other harder.

For organisations outside the EU, the Act's extraterritorial reach means this is not a purely European concern. EY's global survey found that the majority of C-suite leaders consider non-compliance with AI regulations to be the most common AI risk they face — and with the UK, US, and other jurisdictions developing their own AI governance frameworks, investment in AI literacy training is increasingly a global baseline rather than a regional compliance exercise. For the full regulatory landscape, the AI risks and regulations every leader must know covers what the current framework means for organisations of different types and sizes. For the operational risks that Article 4-quality training directly addresses, 5 forms of AI bias hiding in your daily workflow illustrates what happens when employees lack the judgment to catch what AI gets wrong.

The Core Point

The August 2026 enforcement date is not the deadline for starting. It is the deadline for being able to demonstrate that you started in good faith. The organisations that will face the most exposure are not those that tried and fell short — they are those that never started.

Section 08

Quick Reference Checklist

For compliance, HR, and L&D leads assessing current Article 4 readiness.

EU AI Act Article 4 — Compliance Checklist
We have audited which AI tools employees are using, including unsanctioned tools not approved by IT or management.
We have a baseline AI literacy programme covering all staff — not just a one-off awareness session, but documented training with records of completion.
We have role-specific training for employees in higher-risk functions — HR, finance, customer service, legal, and any other team using AI in consequential decisions.
We have training records documenting who received what and when, in a format that would withstand regulatory scrutiny.
We understand which of our AI uses may qualify as high-risk under Annex III of the EU AI Act — even with some categories delayed to December 2027.
We have a process for updating training as our AI tool usage changes and as regulatory guidance develops through 2026.
We have leadership oversight of AI governance, not just operational compliance — with a named person accountable for Article 4 readiness.
Frequently Asked Questions
EU AI Act Training Requirements — Common Questions
Answers to the questions compliance, HR, and L&D leads most commonly ask about Article 4 and what it requires in practice.
What does the EU AI Act require for employee training?
Article 4 of the EU AI Act requires all providers and deployers of AI systems to ensure a sufficient level of AI literacy for their staff and anyone dealing with AI systems on their behalf. The obligation has been in force since 2 February 2025. Training must be proportional to each employee's role, experience, and the AI systems they use — generic awareness training alone is unlikely to be sufficient.
When does EU AI Act enforcement start?
Article 4's AI literacy obligation came into force on 2 February 2025. National enforcement authorities begin supervising and enforcing Article 4 from August 2026. Some high-risk AI system requirements have been delayed to December 2027, but the Article 4 training obligation is unaffected by that delay.
What are the EU AI Act fines for non-compliance?
The EU AI Act establishes three tiers: up to €35 million or 7% of global annual turnover for prohibited AI practices, up to €15 million or 3% for high-risk system violations, and up to €7.5 million or 1% for misleading information. No direct fine applies for violating Article 4 alone, but Article 4 breaches are taken into account when calculating penalties for other violations — and inadequate training weakens every compliance defence.
Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act applies to any organisation operating in or serving the European market, regardless of where it is headquartered. McKinsey reports that 88% of organisations already use AI in at least one business function — meaning Article 4's reach is effectively global for any organisation with EU customers, employees, or operations.
August 2026 is a practical deadline, not a distant one.

Savia's AI literacy learning paths are built for exactly what Article 4 requires — role-specific, documented, and designed to be updated as both the tools and the regulatory landscape develop. Whether you are building from scratch or auditing what you already have, the time to start is now.