Most conversations about the EU AI Act focus on the technology — prohibited practices, high-risk systems, conformity assessments. What receives far less attention is the obligation that applies to almost every organisation right now, regardless of sector, size, or how sophisticated their AI use actually is.
A 2026 readiness analysis by Vision Compliance found 78% of enterprises are unprepared for their EU AI Act obligations — and the most commonly missed obligation is not a technical one. It is Article 4: the legal requirement to ensure that all staff working with AI systems have a sufficient level of AI literacy. Article 4 has been in force since 2 February 2025. The majority of companies in Europe do not even know it exists.
This article explains what Article 4 requires, what the penalty structure looks like, who it applies to, and what your organisation needs to do before national enforcement begins in August 2026.
The EU AI Act — Timeline and Scope
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024 and applies a risk-based approach: the higher the risk an AI system poses, the stricter the obligations on the organisations deploying it.
Implementation is staggered across four phases. The Article 4 literacy obligation sits at the earliest point — it is not a future requirement. It is already law.
The Act applies to any organisation operating in or serving the European market — not just EU-headquartered companies. McKinsey reports that 88% of organisations already use AI in at least one business function — meaning Article 4's reach is practically global for any organisation with EU customers, employees, or operations.
What Article 4 Actually Requires
Three things define the practical scope of this obligation.
Who it applies to. Article 4 affects any organisation that uses AI systems, regardless of size or sector: law firms using AI-powered document review, hospitals with diagnostic support systems, HR departments filtering CVs with algorithms, marketing teams generating content with generative AI. If anyone in your company uses ChatGPT, Copilot, or Gemini, Article 4 applies. Compliance audits typically surface between 5 and 12 undocumented AI tools per company, most installed by employees without IT or management awareness.
What "sufficient" means. The European Commission has clarified that AI literacy means skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems and gain awareness about the opportunities and risks of AI and possible harm it can cause. It is not about turning everyone into a machine learning engineer. It is about ensuring people who work with AI understand it well enough to use it responsibly.
Training must be proportional to role. Generic awareness training for all employees is unlikely to be sufficient on its own. Article 4 explicitly requires measures to take into account each person's technical knowledge, experience, education, and training, as well as the context the AI systems are to be used in. A customer service agent using an AI response tool needs different training from a compliance officer overseeing an AI-assisted risk assessment. For a breakdown of what different roles need, our guide to what AI training employees need maps this by function.
The Penalty Structure — What Non-Compliance Actually Costs
The EU AI Act establishes three tiers of fines under Article 99, calibrated to the severity of the violation: up to €35 million or 7% of global annual turnover for engaging in prohibited AI practices; up to €15 million or 3% for non-compliance with most other obligations; and up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information to authorities. In each tier, the higher of the two amounts applies; for SMEs, the lower.
To put these figures in context: 7% of global revenue would cost Meta approximately $8.5 billion, Google $14 billion, and Microsoft $16 billion based on 2024 financials. Even at the 1% tier, for a company with €50 million in revenue, that is €500,000.
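To make the tier arithmetic concrete, here is a minimal sketch of the exposure calculation, assuming the Article 99 structure described above: a fixed cap or a percentage of worldwide turnover, the higher of the two for most undertakings and the lower for SMEs. The tier names and the max_exposure helper are illustrative, not terms from the Act; note that the €500,000 figure reflects the percentage alone, which governs only where the SME rule applies.

```python
# Illustrative sketch of EU AI Act Article 99 fine exposure.
# Tier names and this helper are hypothetical, not terms from the Act.
TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),  # up to EUR 35M or 7% of turnover
    "other_obligations":     (15_000_000, 0.03),  # up to EUR 15M or 3%
    "incorrect_information": (7_500_000,  0.01),  # up to EUR 7.5M or 1%
}

def max_exposure(turnover_eur: float, tier: str, sme: bool = False) -> float:
    """Upper bound of the fine: the higher of cap and percentage,
    or the lower of the two for SMEs."""
    cap, pct = TIERS[tier]
    pct_amount = pct * turnover_eur
    return min(cap, pct_amount) if sme else max(cap, pct_amount)

# The article's example: 1% of EUR 50M turnover
print(f"EUR {max_exposure(50_000_000, 'incorrect_information', sme=True):,.0f}")
# -> EUR 500,000; without SME treatment the EUR 7.5M cap would govern
print(f"EUR {max_exposure(50_000_000, 'incorrect_information'):,.0f}")
# -> EUR 7,500,000
```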
No direct fine attaches to a breach of Article 4 alone. From August 2025, however, organisations may face civil liability if AI systems used by inadequately trained staff cause harm to consumers, business partners, or other third parties.
More significantly, Article 4 breaches will be taken into account by regulators when considering penalties for other violations. Inadequate training will not trigger a standalone fine — but it makes every other violation more expensive and substantially weakens any compliance defence. Beyond fines, the Act allows employees, rejected job candidates, or consumer associations to file complaints with the national authority. A regulatory investigation is reputational risk as much as financial risk.
What the High-Risk Delay Means — and What It Doesn't
The March 2026 announcement that the EU Council agreed to delay certain high-risk AI system requirements was widely reported as providing businesses with relief. The nuance is important.
The delay responds to two problems: the European Commission missed its February 2026 deadline to publish technical guidance on high-risk AI classification, and only 8 of the 27 EU member states had designated their national contact points. In that context, applying full high-risk obligations without a functioning supervisory infrastructure would have created compliance requirements with no clear mechanism for demonstrating conformity.
The Article 4 literacy obligation is unaffected by this delay. It has been in force since 2 February 2025. August 2026 is when national authorities begin enforcing it, which means the window to demonstrate good-faith compliance is closing, not extending.
Demonstrating that you have been working toward compliance — even if not yet fully compliant — is a significant mitigating factor in penalty calculations. Documented good-faith effort matters. Inaction does not. The organisations that will face the most exposure are not those that tried and fell short. They are those that never started.
The Scale of the Compliance Gap
The readiness data makes for uncomfortable reading across every dimension of Article 4 compliance.
Only 32% of employees have received formal AI training at work. Set against the 78% of enterprises rated unprepared, these figures show both the scale of what Article 4 asks organisations to address and how far most are from meeting it in any way that would withstand regulatory scrutiny. For most organisations, the compliance gap is not a gap at the margins; it is the baseline condition.
What This Means Practically for Your Organisation
Translating Article 4 into practice, here is what an L&D, HR, or compliance lead actually needs to do before August 2026, in sequence:
1. Inventory every AI system in use, including the unofficial tools employees have adopted without IT or management awareness.
2. Map each role to the AI systems it touches and the depth of understanding that role requires.
3. Deliver training proportional to those roles rather than a single generic awareness course.
4. Document who was trained, on what, and when, so that good-faith effort can be evidenced to a regulator.
5. Review and update the programme as the tools and the regulatory guidance evolve.
The Broader Context — What Comes Next
Article 4 is the floor, not the ceiling. As August 2026 arrives with transparency obligations and high-risk system requirements — even with some categories delayed to December 2027 — organisations that have not covered the literacy baseline will face a harder compliance problem for everything that follows.
Article 14 and Article 4 are structurally linked. Article 14 requires that people using high-risk AI systems have the skills, knowledge, and authority to understand, monitor, and override those systems. You cannot demonstrate meaningful human oversight with an undertrained workforce — failing one obligation makes demonstrating the other harder.
For organisations outside the EU, the Act's extraterritorial reach means this is not a purely European concern. EY's global survey found that the majority of C-suite leaders consider non-compliance with AI regulations to be the most common AI risk they face — and with the UK, US, and other jurisdictions developing their own AI governance frameworks, investment in AI literacy training is increasingly a global baseline rather than a regional compliance exercise. For the full regulatory landscape, the AI risks and regulations every leader must know covers what the current framework means for organisations of different types and sizes. For the operational risks that Article 4-quality training directly addresses, 5 forms of AI bias hiding in your daily workflow illustrates what happens when employees lack the judgment to catch what AI gets wrong.
Quick Reference Checklist
For compliance, HR, and L&D leads assessing current Article 4 readiness, the questions to ask are the ones a regulator would:
- Can you list every AI system in use across the organisation, including unofficial tools?
- Do you know which roles interact with which AI systems, and what each role needs to understand?
- Is training adapted to technical knowledge, experience, and context, rather than one generic course?
- Is completion documented well enough to evidence good-faith effort?
- Is there a process for updating training as tools and guidance change?
Savia's AI literacy learning paths are built for exactly what Article 4 requires — role-specific, documented, and designed to be updated as both the tools and the regulatory landscape develop. Whether you are building from scratch or auditing what you already have, the time to start is now.