Only around 12% of workers have had training specifically on AI, according to Pew Research, despite around half having undergone some form of training in the past year. That gap exists not because organisations are not investing, but because most AI training is designed for the wrong unit of analysis. It treats the organisation as the learner rather than the individual, producing content that is simultaneously too basic for some employees, too advanced for others, and genuinely useful to neither.
In 2026, this is also increasingly a regulatory question. The EU AI Act's Article 4 requires AI literacy training to be proportional to role and context: generic one-size-fits-all rollouts are unlikely to satisfy that obligation. What employees need depends on what they do, what tools they use, and what the consequences are when something goes wrong. This guide breaks that down practically, by role, with concrete scenarios and measurable outcomes. For the regulatory context behind that obligation, see what the EU AI Act means for your team's training.
Why Generic Training Fails — What the Data Actually Tells Us
More than half of employees say they primarily use AI to double-check their work and draft emails or reports. Managers are using it for more strategic work: analysing data, conducting research, managing priorities. A Microsoft analysis found that AI adoption leads in IT, procurement, finance, and professional services, while marketing, sales, and operations lag behind significantly. That distribution is not accidental. It reflects who has received training designed for their role and who has not.
The consequence is organisations that have technically "done AI training" but whose teams are no more capable of catching an AI error, verifying an AI output, or knowing when to escalate than they were before. A 2026 Connext Global AI Oversight Report found that only 17% of US workers said AI is reliable without oversight, while nearly 40% of what appeared to be AI productivity gains were being lost to rework and low-quality output. Organisations that have ticked the training box are not protected from that failure rate. They are exposed to it.
The solution is not more training. It is more targeted training. That starts with a foundation layer that every employee needs before anything role-specific can work.
The Foundation Layer — What Everyone Needs First
Role-specific training builds on this foundation; without it, everything else is built on sand. Four capabilities belong in every employee's foundation, regardless of function, seniority, or how much AI they currently use: understanding what the AI embedded in their everyday tools is actually doing, verifying AI outputs before relying on them, recognising when an output is wrong, and knowing when to escalate rather than quietly work around the problem.
Foundation training can be built once and deployed everywhere, making it the highest-efficiency investment in any constrained programme. But it is a floor, not a ceiling. What follows is where the real capability gap opens up, and where it differs substantially by role.
Frontline Non-Technical Employees
Frontline employees are simultaneously the most exposed to AI-assisted workflows and the least served by available training. More than half of AI education programmes require a degree and are designed for either engineers or executives, leaving the majority of the workforce without appropriate material. Meanwhile, just 51% of non-managers feel they have the resources they need for learning and development, compared to 66% of managers and 72% of senior executives: a structural imbalance that generic training cannot fix.
BCG's annual AI at Work survey found that only half of frontline employees regularly use AI tools — what BCG describes as a "silicon ceiling". Companies in financial services and technology have already moved beyond productivity gains to workflow redesign. Frontline employees are the ones being left behind by both the technology and the training.
What they need is not an explanation of what AI is. They need the ability to catch when something AI-generated is wrong and know what to do about it. The tools they encounter are mostly embedded in platforms they already use: suggested replies in customer service software, automated ticket routing, AI-assisted scheduling. Training that focuses on hypothetical AI tools misses the point entirely. Training that names the specific tools in their workflow and explains what the AI is doing in each one lands. Full detail in What AI Training Do Frontline Employees Actually Need?
Managers & Team Leads
Managers are frequently the weakest link in an organisation's AI governance chain. Not because they are resistant, but because no one has trained them specifically for the oversight role. Microsoft's research confirms that AI adoption across industries is heavily shaped by social norms learned from leaders and peers. Leaders facilitate adoption through clear communication, demonstrating their own learning, and setting realistic expectations about what AI can accomplish. None of that is possible without training specifically designed for their role.
A manager who understands AI outputs well enough to question them creates a multiplier effect across their entire team. A manager who rubber-stamps AI outputs because they feel they should trust the tool does the opposite. SHRM's 2026 State of AI in HR report found that by 2025, 73% of those at HR director level and above had adopted AI, compared to 66% of managers and 65% of individual contributors. Adoption decreases the further down the hierarchy you go, despite the fact that frontline managers are the people best positioned to coach adoption on the ground.
Knowledge Workers — Finance, Legal, Marketing, Operations
This is the group where generic training fails most visibly, and where the cost of that failure is most direct. A finance analyst who has completed a general AI awareness module can explain what a large language model is. They cannot necessarily tell the difference between a plausible-sounding financial projection generated by AI and a reliable one. Those are different skills. Only the second one matters professionally.
PwC's survey of nearly 50,000 workers found that daily generative AI users are more likely to report tangible productivity benefits than infrequent users (92% vs 58%). The gap between those groups is not access to tools. It is practised judgment about how to use them. Context engineering — the ability to bring deep domain expertise to the prompting process to get consistent, accurate AI outputs — is one of the most valuable and undertaught skills for this group. Full detail by function in What AI Training Do Knowledge Workers Actually Need?
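To make context engineering concrete, here is a minimal sketch of the difference between a bare request and a context-engineered one. Everything in it is hypothetical: the figures, the reporting rules, and the `build_contextual_prompt` helper are invented for illustration, not taken from any particular tool.

```python
# A minimal sketch of context engineering for a finance task.
# All figures and rules below are invented for illustration.

GENERIC_PROMPT = "Summarise this quarter's financial results."

def build_contextual_prompt(figures: dict[str, float], rules: list[str]) -> str:
    """Assemble a prompt that carries the analyst's domain context:
    the actual source data, the constraints the output must respect,
    and an explicit instruction to flag gaps instead of guessing."""
    data = "\n".join(f"- {name}: {value:,.0f}" for name, value in figures.items())
    constraints = "\n".join(f"- {rule}" for rule in rules)
    return (
        "You are assisting a finance analyst. Use ONLY the figures below; "
        "do not invent numbers.\n\n"
        f"Figures (GBP):\n{data}\n\n"
        f"Reporting rules:\n{constraints}\n\n"
        "Task: draft a three-sentence summary of quarterly performance. "
        "If a required figure is missing, say so rather than estimating."
    )

prompt = build_contextual_prompt(
    figures={"Q3 revenue": 4_200_000, "Q3 operating costs": 3_900_000},
    rules=[
        "Express margins to one decimal place.",
        "Never annualise a single quarter's figures.",
    ],
)
print(prompt)
```

The generic prompt invites the model to fill gaps with plausible invention. The structured one supplies the source data, states the domain rules, and makes omissions visible, which is precisely the difference between a plausible-sounding output and a verifiable one.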
Executives & Senior Leaders
There is a documented gap between executive AI adoption and employee utilisation. BCG's global survey found that leaders who demonstrate strong support for AI make frontline employees more likely to use it regularly, enjoy their jobs, and feel positively about their careers. But that support is frequently expressed as general enthusiasm rather than demonstrated capability.
What executives need is strategic literacy: where AI creates genuine value in their function, what the regulatory obligations are, how to lead teams through change, and how to ask the right questions rather than defer to whoever shouts loudest about AI in the organisation. In SHRM's 2026 survey, 67% of organisations cited lack of awareness of AI capabilities as the largest barrier to adoption, the top reason by a considerable margin. That barrier applies just as much to executives who assume awareness on its own is sufficient. The executive who cannot evaluate an AI claim independently will consistently be outmanoeuvred by the person who can frame one persuasively. Full detail in What AI Training Do Senior Leaders Actually Need?
HR Professionals — Highest Regulatory Risk
HR is one of the highest-risk functions under the EU AI Act. AI used in recruitment, performance management, and workforce planning sits squarely within Annex III's high-risk classification, meaning the people operating these tools have legal obligations around oversight that go beyond best practice. The consequences of getting it wrong here are not internal. They are legal, and they are already arriving.
The mechanism behind AI recruitment bias is not subtle once you understand it. A tool trained on historical hiring data learns, in effect, who your organisation has hired in the past. If those hires skew toward a particular demographic — by gender, age, educational background, or postcode — the model weights future candidates accordingly, without any explicit instruction to do so. An HR professional who understands this can interrogate shortlists. One who does not will approve them.
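The mechanism is simple enough to demonstrate in a few lines. The sketch below is a toy model on synthetic data, not any vendor's system: it trains a standard classifier on hypothetical hiring records in which one demographic proxy depressed hire rates, and the model reproduces that penalty without ever being told to.

```python
# Toy demonstration: a model trained on skewed hiring history learns to
# penalise a demographic proxy. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)          # what should drive hiring
proxy = rng.integers(0, 2, size=n)  # e.g. a keyword, club, or postcode

# Historical decisions were biased: at equal skill, candidates with the
# proxy attribute were hired less often. No one wrote this rule down;
# it is simply what the records contain.
hired = (skill - 1.2 * proxy + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

print(f"weight on skill: {model.coef_[0][0]:+.2f}")  # strongly positive
print(f"weight on proxy: {model.coef_[0][1]:+.2f}")  # negative: bias learned
```

Nothing in that code instructs the model to discriminate; the negative weight comes entirely from the historical labels. That is exactly the pattern an HR professional needs to be able to recognise before approving a shortlist.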
Amazon discovered this the hard way. The company was forced to scrap its AI-driven recruitment tool after finding it penalised resumes containing the word "women" — as in "women's chess club captain" — because the model had been trained predominantly on male applicants. The tool learned exactly what it was trained on. In May 2025, a US federal court granted preliminary class certification in a lawsuit alleging Workday's AI screening system engaged in a pattern of discrimination. Full detail in What AI Training Do HR Teams Actually Need?
As deployer of these Annex III high-risk systems, your organisation, not the software vendor, is responsible for human oversight, worker notification, and maintaining records that demonstrate compliance. HR workers report the highest rework rate of any function (38%), which suggests AI outputs in this domain are being relied on first and scrutinised later, not given the oversight the Act requires.
Compliance & Legal — The Evidentiary Standard
Compliance teams are increasingly the people inside organisations who will be asked to demonstrate EU AI Act conformity — including evidence that staff using AI have been appropriately trained. They cannot do that job if their own AI literacy is limited to general awareness. Article 4 breaches will likely be taken into account by regulators when considering penalties for other violations. Compliance professionals need to understand not just the obligation but the evidentiary standard: what does documented AI literacy actually look like to a national market surveillance authority?
In August 2023, the EEOC settled the first-of-its-kind AI employment discrimination case against iTutorGroup, which had programmed its recruitment software to automatically reject applicants based on age. The EEOC Chair made clear that employers remain liable when their AI tools discriminate against applicants on the basis of protected characteristics. The compliance implication is direct: employer liability for AI-assisted decisions does not transfer to the software vendor. A compliance professional who cannot identify when an AI workflow is creating regulatory exposure — before a claim is filed — is not meeting the oversight obligation.
What this role needs is governance literacy: understanding how AI changes the risk and accountability landscape, how to document AI-assisted decisions, and how to identify when an AI workflow is creating regulatory exposure the organisation has not assessed.
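What documenting an AI-assisted decision can look like is easier to show than to describe. The sketch below is an illustrative assumption, not a prescribed Article 4 format: the `AIDecisionRecord` structure and every field name in it are invented for this example. The substance is what matters: which system, which named reviewer, which checks, and what changed.

```python
# Illustrative only: one possible shape for an AI-assisted decision record.
# Field names are assumptions, not a prescribed EU AI Act schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    system_name: str             # which AI tool produced the output
    use_case: str                # e.g. "CV screening shortlist"
    ai_output_reference: str     # where the raw AI output is stored
    human_reviewer: str          # named person exercising oversight
    checks_performed: list[str]  # what the reviewer actually verified
    changes_made: str            # how the final decision differed from the AI's
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    system_name="vendor-screening-tool-v3",  # hypothetical tool name
    use_case="CV screening shortlist, Q3 engineering hire",
    ai_output_reference="hr-records/2026/shortlists/eng-q3.json",
    human_reviewer="j.smith (HR)",
    checks_performed=[
        "Compared shortlist demographics against the applicant pool",
        "Spot-checked five rejected CVs for false negatives",
    ],
    changes_made="Re-added two candidates rejected on a CV formatting artefact",
)

print(json.dumps(asdict(record), indent=2))
```

A log like this answers the question a market surveillance authority will actually ask: not whether oversight was promised, but whether a named person performed specific checks on a specific output, and when.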
When Resources Force a Choice — A Prioritisation Framework
The case for role-differentiated training is strong. The organisational reality is that most L&D teams are working with constrained budgets, stretched headcount, LMS limitations that make bespoke content expensive, and leadership timelines that push toward speed over quality. A generic rollout is not always a failure of ambition. Sometimes it is the only feasible option in the available window.
If you can only build one role-specific cluster before a deadline, the table below sets out which one to prioritise first and why the order runs the way it does.
| Role | Priority capability | Training scenario | Why first |
|---|---|---|---|
| HR professionals | Bias recognition + legal obligations | Audit AI shortlist for systematic bias before hire | Highest legal risk. Mobley v. Workday shows litigation over AI screening is already under way. Generic training is least defensible for Annex III systems. |
| Compliance / legal | Governance literacy | Apply governance framework to AI risk output | Direct regulatory exposure. Cannot demonstrate Article 4 conformity with general awareness. iTutorGroup EEOC settlement confirms liability does not transfer to the vendor. |
| Managers | Oversight judgment | Audit AI performance summary before use | Highest multiplier. BCG confirms leader support directly increases frontline AI adoption. One trained manager extends impact across their whole team. |
| Knowledge workers | Domain-specific critique | Identify errors in AI-generated analysis | Highest visible risk. AI-assisted outputs reaching clients or regulators carry direct external consequences. |
| Senior leaders | Evaluative framework | Apply governance questions to AI-informed decision | Governance credibility. 67% of organisations cite lack of leadership AI awareness as the top adoption barrier (SHRM 2026). |
| Frontline employees | Output verification | Identify which AI responses require human review | Foundation scale. Build once, deploy everywhere — highest efficiency per resource spent. |
Foundation training for all employees is not optional, but it can be built once and deployed everywhere, which makes it the highest-efficiency investment regardless of constraints. The role-specific layers above it are where the legal, operational, and governance exposure actually lives.
Savia's AI learning paths are built around this logic: practical, role-specific capability that you can observe, assess, and report on.