Only around 12% of workers have had training specifically on AI, according to Pew Research, despite around half having undergone some form of training in the past year. That gap exists not because organisations are not investing, but because most AI training is designed for the wrong unit of analysis. It treats the organisation as the learner rather than the individual, producing content that is simultaneously too basic for some employees and too advanced for others. Genuinely useful to neither.

In 2026, this is also increasingly a regulatory question. The EU AI Act's Article 4 requires AI literacy training to be proportional to role and context: generic one-size-fits-all rollouts are unlikely to satisfy that obligation. What employees need depends on what they do, what tools they use, and what the consequences are when something goes wrong. This guide breaks that down practically, by role, with concrete scenarios and measurable outcomes. For the regulatory context behind that obligation, see what the EU AI Act means for your team's training.

Section 01

Why Generic Training Fails — What the Data Actually Tells Us

More than half of employees say they primarily use AI to double-check their work and draft emails or reports. Managers are using it for more strategic work: analysing data, conducting research, managing priorities. A Microsoft analysis found that AI adoption leads in IT, procurement, finance, and professional services, while marketing, sales, and operations lag behind significantly. That distribution is not accidental. It reflects who has received training designed for their role and who has not.

The consequence is organisations that have technically "done AI training" but whose teams are no more capable of catching an AI error, verifying an AI output, or knowing when to escalate than they were before. A 2026 Connext Global AI Oversight Report found that only 17% of US workers said AI is reliable without oversight, while nearly 40% of what appeared to be AI productivity gains were being lost to rework and low-quality output. Organisations that have ticked the training box are not protected from that failure rate. They are exposed to it.

12%
of workers have had training specifically on AI, despite around half having undergone some form of workplace training in the past year.
(Pew Research via HR Dive, 2026)

14%
of workers globally use generative AI daily. Daily users are more likely to report tangible productivity benefits than infrequent users (92% vs 58%).
(PwC Global Workforce Hopes & Fears Survey, 2025, n=49,843)

The gap between those two groups is not access to tools. It is practised judgment about how to use them. The solution is not more training. It is more targeted training. That starts with a foundation layer that every employee needs before anything role-specific can work.

Section 02

The Foundation Layer — What Everyone Needs First

Role-specific training builds on a shared foundation. Without one, it builds on sand. These four capabilities belong in every employee's foundation, regardless of function, seniority, or how much AI they currently use.

Output verification
The professional habit of checking whether what AI produced is accurate, complete, and appropriate before using it. Not a technical skill. A judgment habit. Training for this means practice, not explanation — employees need to encounter wrong AI outputs and catch them, repeatedly, in low-stakes contexts before they encounter them in real work. Only 37% of workers say AI is right without fixes most of the time, and 28% say AI needs attention almost every time. That is the environment employees are walking into.
Data hygiene
What information can and cannot go into AI tools, and why. 29% of employees are unaware that entering data into AI tools may store or reuse it. Foundation training needs to make this concrete: not "be careful with sensitive data" but "here are three scenarios: which of these should you never paste into an AI tool, and what should you do instead?"
Knowing when to escalate
Which decisions should never rest on AI output alone. Training here means giving employees a simple decision framework they can apply in the moment, not a policy document to refer to later. What types of output require a human second opinion? What happens if you get it wrong? A minimal sketch of such a framework follows this list.
Adaptability over tool mastery
The ability to transfer judgment from one AI tool to another as the landscape changes. Tools change every few months. Judgment compounds. Skills for AI-exposed jobs are already changing 66% faster than skills in other roles. Foundation training that teaches a specific tool rather than transferable evaluation habits will be outdated before it is deployed.
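To make that escalation framework concrete, here is a minimal sketch of the kind of in-the-moment rule foundation training might teach. The categories, names, and logic are illustrative assumptions for training design, not a prescribed standard or any vendor's API.

```python
# Illustrative escalation rule for AI-assisted work. The categories and
# logic are training-design assumptions, not a prescribed standard.

HIGH_STAKES = {
    "customer_commitment",   # promises money, credits, or contract terms
    "employment_decision",   # hiring, performance, discipline
    "regulatory_filing",     # anything a regulator may later inspect
    "external_publication",  # leaves the organisation under its name
}

def needs_human_second_opinion(output_type: str,
                               facts_verified: bool,
                               within_own_authority: bool) -> bool:
    """Return True if an AI output should be escalated before use."""
    if output_type in HIGH_STAKES:
        return True                  # consequence rules out solo sign-off
    if not facts_verified:
        return True                  # unverified claims always escalate
    return not within_own_authority  # never act beyond your own remit

# Example: an AI-drafted reply that offers a customer a credit
print(needs_human_second_opinion("customer_commitment", True, False))  # True
```

The whole rule fits on one slide: consequence first, verification second, authority last. That is the level of simplicity an in-the-moment framework needs.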

Foundation training can be built once and deployed everywhere, making it the highest-efficiency investment in any constrained programme. But it is a floor, not a ceiling. What follows is where the real capability gap opens up, and where it differs substantially by role.

Section 03

Frontline Non-Technical Employees

Frontline employees are simultaneously the most exposed to AI-assisted workflows and the least served by available training. More than half of AI education programmes require a degree and are designed for either engineers or executives, leaving the majority of the workforce without appropriate material. 51% of non-managers feel they have the resources they need for learning and development, compared to 66% of managers and 72% of senior executives. That is a structural imbalance that generic training cannot fix.

BCG's annual AI at Work survey found that only half of frontline employees regularly use AI tools — what BCG describes as a "silicon ceiling". Companies in financial services and technology have already moved beyond productivity gains to workflow redesign. Frontline employees are the ones being left behind by both the technology and the training.

What they need is not an explanation of what AI is. They need the ability to catch when something AI-generated is wrong and know what to do about it. The tools they encounter are mostly embedded in platforms they already use: suggested replies in customer service software, automated ticket routing, AI-assisted scheduling. Training that focuses on hypothetical AI tools misses the point entirely. Training that names the specific tools in their workflow and explains what the AI is doing in each one lands. Full detail in What AI Training Do Frontline Employees Actually Need?

Training Scenario
Zendesk AI — output verification for a customer service team
A customer service agent at a telecoms provider uses Zendesk's AI-suggested replies. The training module presents three AI-generated responses to a billing complaint. Response A confidently states that the customer's contract ends in March 2025 — a date that is wrong by eleven months. Response B uses accurate account details but recommends a credit the agent does not have authority to issue. Response C is accurate, within policy, and appropriate to send. The learner must classify each response and explain their reasoning before the correct answer is revealed.
Measurable outcome
Correctly classifies AI-suggested responses across factual accuracy, policy compliance, and authority boundaries in at least 4 out of 5 scenarios. That is a standard that protects both the customer and the organisation, not a comprehension test on AI concepts.
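For teams building a module like this in-house, the scenario bank and its pass standard can be captured in a small data structure, which keeps the outcome auditable rather than anecdotal. Everything below is an illustrative sketch: the field names, the example items, and the scoring function are assumptions, not Zendesk's or any LMS vendor's schema.

```python
from dataclasses import dataclass

# Hypothetical item format for a verification module; the fields are
# illustrative, not any vendor's schema.
@dataclass
class VerificationScenario:
    tool: str
    situation: str
    correct_label: str   # "send", "fix_facts", or "escalate"
    rationale: str       # shown to the learner after they answer

scenarios = [
    VerificationScenario("Zendesk AI", "Reply quoting a contract end date",
                         "fix_facts", "Date is wrong by eleven months."),
    VerificationScenario("Zendesk AI", "Reply offering a goodwill credit",
                         "escalate", "Credit exceeds agent authority."),
    VerificationScenario("Zendesk AI", "Reply restating published policy",
                         "send", "Accurate, in policy, within authority."),
]

def passed(answers: list[str], threshold: float = 0.8) -> bool:
    """Pass at 80%+ correct, mirroring the 4-of-5 standard above."""
    correct = sum(a == s.correct_label for a, s in zip(answers, scenarios))
    return correct / len(scenarios) >= threshold

print(passed(["fix_facts", "escalate", "send"]))  # True: 3 of 3 correct
```

Storing the rationale alongside each item matters: the feedback the learner sees after answering is where the judgment habit is actually built.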

Section 04

Managers & Team Leads

Managers are frequently the weakest link in an organisation's AI governance chain. Not because they are resistant, but because no one has trained them specifically for the oversight role. Microsoft's research confirms that AI adoption across industries is heavily shaped by social norms learned from leaders and peers. Leaders facilitate adoption through clear communication, demonstrating their own learning, and setting realistic expectations about what AI can accomplish. None of that is possible without training specifically designed for their role.

A manager who understands AI outputs well enough to question them creates a multiplier effect across their entire team. A manager who rubber-stamps AI outputs because they feel they should trust the tool does the opposite. SHRM's 2026 State of AI in HR report found that by 2025, 73% of those at HR director level and above had adopted AI, compared to 66% of managers and 65% of individual contributors. Adoption decreases the further down the hierarchy you go, despite the fact that frontline managers are the people most positioned to coach adoption on the ground.

The Oversight Problem

The most common failure mode is not a manager who distrusts AI. It is a manager who over-trusts it. Experts advising the World Economic Forum warn that the biggest risk for managers leading AI-augmented teams is productivity loss caused by insufficient oversight — output quality degrades without anyone noticing. They have been told AI is transformative. Nobody has told them what a bad AI output looks like in their domain.

Training Scenario
Workday AI — performance review oversight for a people manager
A team lead at a financial services firm uses Workday's AI-assisted performance summaries ahead of annual reviews. The module presents a summary for a team member described as "consistently high-performing with strong output across the year." The learner is shown the underlying data: Q3 performance was anomalously low, coinciding with a period of parental leave that the AI did not contextualise. The learner must identify what the AI missed, explain why the summary would be misleading in a review conversation, and name two additional data sources they would consult before using it.
Measurable outcome
Identifies the contextualisation failure, names the missing leave context, and lists at least two independent verification steps before using an AI-generated summary in a consequential HR decision. An auditable behaviour, not a statement of intent.

Section 05

Knowledge Workers — Finance, Legal, Marketing, Operations

This is the group where generic training fails most visibly, and where the cost of that failure is most direct. A finance analyst who has completed a general AI awareness module can explain what a large language model is. They cannot necessarily tell the difference between a plausible-sounding financial projection generated by AI and a reliable one. Those are different skills. Only the second one matters professionally.

PwC's survey of nearly 50,000 workers found that daily generative AI users are far more likely to report tangible productivity benefits than infrequent users (92% vs 58%). As noted earlier, what separates those groups is practised judgment, not access to tools. Context engineering — the ability to bring deep domain expertise to the prompting process to get consistent, accurate AI outputs — is one of the most valuable and undertaught skills for this group. Full detail by function in What AI Training Do Knowledge Workers Actually Need?

Training Scenario
Microsoft Copilot — revenue forecast critique for a finance analyst
A finance analyst at a retail company uses Microsoft Copilot to draft a Q4 revenue forecast for a board presentation. The module presents Copilot's output, which contains three embedded errors: a factual error (the forecast uses Q2 revenue figures rather than the most recent Q3 actuals, overstating the baseline by 11%); a methodological error (growth rate is extrapolated linearly from a promotional quarter rather than normalised); and an unstated assumption (the model assumes flat headcount, which contradicts a planned redundancy programme that will affect cost structure). The learner must identify and explain each one. The module does not flag them.
Measurable outcome
Identifies all three error types and explains their compounding effect on the forecast's reliability. This is not an AI literacy objective. It is a professional finance competency that happens to involve AI. Which is precisely what role-specific training should test.

Section 06

Executives & Senior Leaders

There is a documented gap between executive AI adoption and employee utilisation. BCG's global survey found that leaders who demonstrate strong support for AI make frontline employees more likely to use it regularly, enjoy their jobs, and feel positively about their careers. But that support is frequently expressed as general enthusiasm rather than demonstrated capability.

What executives need is strategic literacy: where AI creates genuine value in their function, what the regulatory obligations are, how to lead teams through change, and how to ask the right questions rather than defer to whoever shouts loudest about AI in the organisation. 67% of respondents cite lack of awareness of AI capabilities as the largest barrier to adoption, ahead of every other reason by a considerable margin (SHRM 2026). That statistic applies equally to executives who believe their own awareness is sufficient. The executive who cannot evaluate an AI claim independently will consistently be outmanoeuvred by the person who can frame one persuasively. Full detail in What AI Training Do Senior Leaders Actually Need?

Training Scenario
Vendor due diligence — AI risk assessment for a Chief Procurement Officer
A CPO at a logistics company is reviewing an AI-generated vendor risk assessment recommending approval of a SaaS provider for payroll processing. The module presents the output alongside three governance questions the executive should be asking but has not: Who validated the risk scoring model's accuracy for payroll-sector vendors? Does the tool's training data include recent insolvency cases for this supplier category? What human review step exists before this recommendation influences a £2m contract decision? The learner must identify which questions are absent from the original assessment and explain the business risk each gap creates.
Measurable outcome
Identifies all three governance gaps and articulates the specific risk each creates for the procurement decision. The standard that converts executive AI literacy from general enthusiasm into a governance capability the board can rely on.

Section 07

HR Professionals — Highest Regulatory Risk

HR is one of the highest-risk functions under the EU AI Act. AI used in recruitment, performance management, and workforce planning sits squarely within Annex III's high-risk classification, meaning the people operating these tools have legal obligations around oversight that go beyond best practice. The consequences of getting it wrong here are not internal. They are legal, and they are already arriving.

The mechanism behind AI recruitment bias is not subtle once you understand it. A tool trained on historical hiring data learns, in effect, who your organisation has hired in the past. If those hires skew toward a particular demographic — by gender, age, educational background, or postcode — the model weights future candidates accordingly, without any explicit instruction to do so. An HR professional who understands this can interrogate shortlists. One who does not will approve them.

Amazon discovered this the hard way. The company was forced to scrap its AI-driven recruitment tool after finding it penalised resumes containing the word "women's" — as in "women's chess club captain" — because the model had been trained predominantly on resumes submitted by men. The tool learned exactly what it was trained on. In May 2025, a US federal court granted preliminary class certification in Mobley v. Workday, a lawsuit alleging Workday's AI screening system engaged in a pattern of discrimination. Full detail in What AI Training Do HR Teams Actually Need?

⚠ The EU AI Act Exposure

AI tools used in recruitment and performance management are Annex III high-risk systems. The deployer — your organisation — is responsible for human oversight, worker notification, and maintaining records that demonstrate compliance. HR workers report the highest rework rate (38%) of any function, which suggests AI outputs in this domain are being relied on without sufficient scrutiny rather than being subjected to appropriate oversight.

Training Scenario
Eightfold.ai — shortlist bias audit for an HR Business Partner
An HR Business Partner at a professional services firm is reviewing AI-generated shortlists for a senior analyst role produced by Eightfold.ai. The module presents two shortlists of eight candidates each: one from the AI, one from a human recruiter covering the same applicant pool. Three differences are visible: the AI shortlist excludes all candidates from non-Russell Group universities; it ranks two candidates with career gaps below equivalently qualified candidates without gaps; and it surfaces no candidates from a specific postcode cluster that maps to a majority-minority area of the city. The learner must assess whether each difference reflects genuine quality signals or systematic bias, and state what additional information they would request before the shortlist is used.
Measurable outcome
Correctly identifies all three potential bias signals, distinguishes between quality-based and systematic filtering, and names at least one verification step per difference. The kind of audit that can be demonstrated to a regulator, not just described in a policy document.
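One concrete technique learners can practise in this kind of audit is the four-fifths rule, the EEOC's long-standing screening heuristic for adverse impact: if any group's selection rate falls below 80% of the highest group's rate, the result warrants investigation. The sketch below applies it to shortlist counts. The data shape and numbers are hypothetical, and a flag is a signal to investigate, not proof of bias.

```python
# Four-fifths rule screen for an AI-generated shortlist. The EEOC
# heuristic flags any group whose selection rate falls below 80% of the
# highest group's rate. A flag means "investigate", not "discrimination".

def adverse_impact_flags(applicants: dict[str, int],
                         shortlisted: dict[str, int],
                         threshold: float = 0.8) -> dict[str, float]:
    """Return impact ratios for groups below the four-fifths threshold."""
    rates = {g: shortlisted.get(g, 0) / n for g, n in applicants.items() if n}
    top = max(rates.values(), default=0)
    if top == 0:
        return {}  # nobody shortlisted: nothing to compare
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Hypothetical applicant pool for the senior analyst role above
applicants  = {"russell_group": 120, "other_university": 200}
shortlisted = {"russell_group": 8,   "other_university": 0}

print(adverse_impact_flags(applicants, shortlisted))
# {'other_university': 0.0} -> investigate before the shortlist is used
```

Run against the scenario above, a shortlist that drops every non-Russell Group candidate fails the screen outright, which is exactly the signal the learner should be able to surface and explain.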

Section 08

Compliance & Legal — The Evidentiary Standard

Compliance teams are increasingly the people inside organisations who will be asked to demonstrate EU AI Act conformity — including evidence that staff using AI have been appropriately trained. They cannot do that job if their own AI literacy is limited to general awareness. Article 4 breaches will likely be taken into account by regulators when considering penalties for other violations. Compliance professionals need to understand not just the obligation but the evidentiary standard: what does documented AI literacy actually look like to a national market surveillance authority?

In August 2023, the EEOC settled the first-of-its-kind AI employment discrimination case against iTutorGroup, which had programmed its recruitment software to automatically reject applicants based on age. The EEOC Chair stated that employers cannot rely on AI to discriminate against applicants on the basis of protected characteristics. The compliance implication is direct: employer liability for AI-assisted decisions does not transfer to the software vendor. A compliance professional who cannot identify when an AI workflow is creating regulatory exposure — before a claim is filed — is not meeting the oversight obligation.

What this role needs is governance literacy: understanding how AI changes the risk and accountability landscape, how to document AI-assisted decisions, and how to identify when an AI workflow is creating regulatory exposure the organisation has not assessed.
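Documentation of AI-assisted decisions is easier to sustain when the record format is agreed in advance. Below is a minimal sketch of what a per-decision log entry might capture. The schema is an assumption on our part: the AI Act does not prescribe these fields, but each one maps to a question a market surveillance authority or internal auditor could reasonably ask.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative log entry for an AI-assisted decision. The AI Act does
# not prescribe this schema; the fields are assumptions chosen so that
# each answers a question an auditor could reasonably ask.
@dataclass
class AIDecisionRecord:
    decision: str             # what was decided, in plain language
    tool: str                 # which AI system contributed
    ai_output_summary: str    # what the tool actually said
    human_checks: list[str]   # verification steps performed before use
    overridden: bool          # was the AI recommendation changed?
    accountable_owner: str    # a named role or person, not "the team"
    timestamp: datetime = field(default_factory=datetime.now)

record = AIDecisionRecord(
    decision="Accepted payroll vendor DPA after manual clause review",
    tool="Harvey AI",
    ai_output_summary="Contract flagged as low risk",
    human_checks=["Sub-processor clauses reviewed manually",
                  "Scope of risk model confirmed with vendor"],
    overridden=False,
    accountable_owner="Compliance Manager, EMEA",
)
```

A record like this, kept consistently, is what turns "we have oversight" from a statement of intent into evidence.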

Training Scenario
Harvey AI — contract risk assessment for a compliance manager
A compliance manager at a financial services firm is reviewing a GDPR data processing agreement for a new cloud payroll vendor. Harvey AI has flagged the contract as "low risk". The module asks the learner to identify three governance questions that should be asked before accepting that assessment: Is the AI's risk scoring model validated for GDPR data processing agreements specifically, or trained on general commercial contracts? Does the assessment flag sub-processor arrangements, which are where most payroll-sector GDPR exposure sits? Who in the organisation has accountability if Harvey misses a material clause and a data breach follows? The learner must articulate each question and explain the regulatory exposure each gap creates.
Measurable outcome
Identifies all three governance gaps and explains the specific GDPR exposure each creates. That is the evidentiary standard a national market surveillance authority will apply when assessing whether this organisation's compliance function understood its Article 4 obligations.

Section 09

When Resources Force a Choice — A Prioritisation Framework

The case for role-differentiated training is strong. The organisational reality is that most L&D teams are working with constrained budgets, stretched headcount, LMS limitations that make bespoke content expensive, and leadership timelines that push toward speed over quality. A generic rollout is not always a failure of ambition. Sometimes it is the only feasible option in the available window.

If you can only build one role-specific cluster before a deadline, here is the logic for which one to prioritise and why the order is what it is.

Role | Priority capability | Training scenario | Why first
HR professionals | Bias recognition + legal obligations | Audit an AI shortlist for systematic bias before hire | Highest legal risk. Mobley v. Workday shows AI-assisted hiring discrimination claims proceeding at class scale. Generic training is least defensible for Annex III systems.
Compliance / legal | Governance literacy | Apply a governance framework to an AI risk output | Direct regulatory exposure. Cannot demonstrate Article 4 conformity with general awareness. The iTutorGroup EEOC settlement confirms employer liability does not transfer to the vendor.
Managers | Oversight judgment | Audit an AI performance summary before use | Highest multiplier. BCG confirms leader support directly increases frontline AI adoption. One trained manager extends impact across their whole team.
Knowledge workers | Domain-specific critique | Identify errors in AI-generated analysis | Highest visible risk. AI-assisted outputs reaching clients or regulators carry direct external consequences.
Senior leaders | Evaluative framework | Apply governance questions to an AI-informed decision | Governance credibility. 67% cite lack of leadership AI awareness as the top adoption barrier (SHRM 2026).
Frontline employees | Output verification | Identify which AI responses require human review | Foundation scale. Build once, deploy everywhere — highest efficiency per resource spent.

Foundation training for all employees is not optional, but it can be built once and deployed everywhere, which makes it the highest-efficiency investment regardless of constraints. The role-specific layers above it are where the legal, operational, and governance exposure actually lives.

AI Training by Role — Design Checklist
Foundation layer is in place — output verification, data hygiene, escalation decisions, and adaptability — before any role-specific content is deployed.
Each role section has a concrete scenario, not just a learning objective, drawn from their actual workflow rather than a hypothetical.
Measurable outcomes are defined by role — specific observable behaviours, not "understands AI limitations."
Managers have received oversight-specific training, not the same module as individual contributors.
HR and compliance functions have role-specific content covering bias recognition and the EU AI Act obligations that apply to their function directly.
Executives have been trained to evaluate AI claims, not just receive AI outputs, with a framework they can apply consistently across decisions.
Training content references the specific tools employees use, not hypothetical AI systems unconnected to their workflow.
Frequently Asked Questions
AI Training by Role — Common Questions
Answers to the questions L&D leads, HR teams, and compliance functions most commonly ask when designing role-specific AI training programmes.
What AI training do all employees need regardless of role?
Every employee needs four foundational capabilities before role-specific training can be effective: output verification (the habit of checking whether AI-generated content is accurate before using it, which matters because only 37% of workers say AI is right without fixes most of the time); data hygiene (knowing what information cannot go into AI tools and why); knowing when to escalate (which decisions should never rest on AI output alone); and adaptability over tool mastery (the judgment to transfer skills between tools, given that AI-exposed job skills are already changing 66% faster than other roles). Without this foundation, role-specific training builds on sand.
What AI training do managers need specifically?
Managers need applied literacy plus oversight capability. Microsoft's research confirms that leaders facilitate AI adoption through clear communication, demonstrating their own learning, and setting realistic expectations. None of this is possible without training specifically designed for their oversight role. Concretely, manager training should involve scenario-based practice: given an AI-generated performance summary, what would you verify before using it in a review conversation? The learning objective is an audit behaviour that can be observed, not a comprehension test on oversight principles.
What AI training do HR professionals need?
HR professionals need applied literacy with specific depth in bias recognition and the legal obligations that attach to AI-assisted employment decisions. HR is one of the highest-risk functions under the EU AI Act. The 2025 preliminary class certification in Mobley v. Workday shows that discrimination claims over AI-assisted hiring can now proceed at class scale, and the EEOC's iTutorGroup settlement confirmed that employer liability does not transfer to the software vendor. Training should include practice in auditing AI-generated shortlists for systematic bias — the kind of audit that can be demonstrated to a regulator, not just described in a policy document. See the EU AI Act deployer obligations guide for the full picture of what this requires in practice.
What AI training do compliance and legal teams need?
Compliance and legal functions need governance literacy: understanding how AI changes the risk and accountability landscape, how to document AI-assisted decisions, and how to identify when an AI workflow is creating regulatory exposure. The EEOC's settlement with iTutorGroup established that employer liability for AI-assisted decisions does not transfer to the software vendor. Compliance teams cannot demonstrate EU AI Act conformity if their own AI literacy is limited to general awareness. Training should focus on applying structured governance frameworks to AI-critical outputs.
If budget forces a choice, which role should be trained first?
Three tiers of priority. Highest legal risk first: HR and compliance, where generic training is least defensible under the EU AI Act and where active litigation (Mobley v. Workday) shows AI-assisted hiring discrimination claims proceeding at class scale. Highest multiplier effect second: managers, where BCG confirms that leader support directly increases frontline AI adoption. Highest visible business risk third: knowledge workers whose AI-assisted outputs reach clients, customers, or regulators. Foundation training for all employees is not optional, but it can be built once and deployed everywhere, making it the highest-efficiency investment regardless of constraints.
Role-specific AI training is not a premium option.
It is the minimum standard that produces measurable change.

Savia's AI learning paths are built around this logic: practical, role-specific capability that you can observe, assess, and report on.