BCG's 2025 research finds that only 25% of employees say they have received sufficient AI training from their employer — despite the majority of organisations having adopted at least one AI tool. The gap is not investment. The gap is specificity.
What AI training do employees actually need? Not another awareness session about what AI is. Employees need training on the specific tools already embedded in their daily workflows, the judgment to evaluate what those tools produce, and the role-specific skills to apply AI safely in their particular function. This guide breaks that down concretely: the tools, the knowledge gaps, and the four skills that matter most regardless of which products your organisation uses.
It is designed to sit alongside, not repeat, the complete guide to building an AI training programme, which covers programme design, assessment, and governance.
Why AI Awareness Is No Longer Enough
There is a difference between knowing AI exists and knowing how to use it well enough that your organisation benefits rather than takes on risk. Awareness training covers the first. It rarely touches the second. The gap between the two has become more consequential as AI has moved from an optional experiment to an embedded feature of most workplace tools.
The tools are already embedded in most employees' daily workflows, often without their full awareness. Microsoft Copilot is built into the Microsoft 365 apps. Google Gemini is integrated into Workspace. ChatGPT is being used whether IT has approved it or not. The question is not whether employees are using AI. It is whether they understand what they are doing when they use it, and whether they can tell when it is getting something wrong.
What AI literacy actually means is not familiarity with tools. It is the combination of domain knowledge, critical judgment, and practical habits that allows someone to use AI effectively and catch it when it fails. Awareness training develops none of these.
The Tools Employees Are Already Using — and What They Need to Know
Most training programmes skip this entirely. They cover AI in general terms without addressing the specific tools employees encounter daily, which means employees leave more conceptually aware but no better equipped for their actual working environment. The tool categories below are organised by type rather than brand, because the specific product matters less than understanding what kind of AI is involved and what its characteristic failure modes are.
Standalone, general-purpose chat assistants such as ChatGPT are the tools most employees encounter first and use most frequently. They are versatile, powerful and, for users who have not been trained in how they work, genuinely risky in proportion to how confident they sound. The core issue is that these models produce incorrect information with exactly the same fluency and apparent certainty as accurate information. An employee who does not know this will treat the output as authoritative. An employee who does know this will treat it as a well-informed first draft that requires verification.
Unlike standalone chat tools, embedded AI features are woven directly into the applications employees already use, and many employees use them without fully registering that they are interacting with AI at all. A meeting summary appears in Teams. A reply suggestion appears in Gmail. A document outline appears in Notion. The interaction feels like a standard product feature, not an AI output requiring scrutiny.
Meeting transcription and summarisation tools are now standard in most enterprise environments. They record, transcribe, and summarise conversations, often automatically, sometimes without all participants being aware. The legal and reputational exposure here is distinct from other AI categories: it involves data belonging to other people, recorded without explicit consent, processed by third-party systems with their own data retention practices.
For non-technical employees who work with data, AI is increasingly surfacing insights, generating formulas, and producing charts automatically. This is where the stakes of uncritical AI use are highest. A wrong number presented confidently in a board report or client proposal is a significant organisational risk — and the confidence with which AI presents numbers makes them harder to question, not easier.
Meta's Galactica model, developed to assist with scientific research, produced outputs that were coherent, plausible, and wrong. In testing, it fabricated citations, generated confident-sounding but fictitious scientific claims, and produced text that passed casual review as authoritative. It was taken offline within three days of public release. The lesson applies to every AI tool handling analytical or factual content: the more sophisticated the presentation, the less likely an uncritical reviewer is to question it.
Beyond the universal tool categories above, employees in specific functions encounter AI embedded in the products their roles depend on daily. This is where training stakes are often highest, because the AI is operating in domain-specific contexts where errors carry professional, legal, or reputational consequences, and where the employee may be the only person in a position to catch them.
Research from 2026 shows that managers are already using AI for strategic tasks including data analysis, talent decisions, and priority management, while most individual contributors are still at the drafting-and-checking stage. That gap will narrow as tools become more embedded in role-specific platforms — which makes role-specific training urgent, not optional.
HR teams using AI in applicant tracking systems such as Workday and Greenhouse need to understand bias risks in AI-assisted candidate screening. Amazon's internal hiring tool, trained on historical resumes, systematically downranked candidates who attended women's colleges or listed women's organisations on their profile — without any explicit instruction to do so. It was scrapped after the pattern was identified internally. The same risk exists in any tool trained on historical hiring data, and it requires human review at the screening stage, not just at the offer stage.
Marketing teams using tools like Jasper or HubSpot AI need to understand what happens when AI-generated content goes out without adequate review. In 2023, CNET published dozens of AI-generated financial explainers containing factual errors that passed through unchecked because the output looked polished and authoritative. CNET issued public corrections, the press coverage was extensive, and the reputational damage outlasted the corrections by months. AI tools have no mechanism for flagging when a claim they generate is unverifiable. That is the reviewer's job, and it only gets done if the reviewer has been trained to do it.
Customer service teams using AI response suggestions and chatbots need to know when to override and escalate. In 2024, Air Canada's chatbot gave a passenger incorrect information about bereavement fares, telling him he could apply for a discounted rate retrospectively when no such policy existed. He booked flights in reliance on that advice. Air Canada argued the chatbot was a separate legal entity responsible for its own statements. The BC Civil Resolution Tribunal rejected this outright and ordered Air Canada to pay the passenger roughly CAD $812 in damages, interest, and tribunal fees, establishing that organisations are fully responsible for everything their AI publishes regardless of whether a human reviewed it.
Finance teams using Excel Copilot or Power BI AI features need to understand model assumptions and verification requirements before any AI-assisted figures reach a board pack or a regulatory submission. A number that arrives formatted as an AI-generated chart carries the same verification obligation as a number in a raw spreadsheet, and is considerably easier to present without scrutiny. The full accountability framework across all of these functions is covered in detail in AI content accountability: how to manage risk, quality, and ownership.
The AI Skills Employees Need — Regardless of Which Tools They Use
Tools change every few months. The AI skills employees need in 2026 and beyond are the ones that compound over years. A team trained in these four skills will adapt faster to new tools, catch more errors, and make more accountable decisions regardless of what the AI landscape looks like in twelve months.
What This Looks Like by Role — A Quick Reference
The table below maps organisational roles to the AI tools they most commonly encounter and the training focus that will make the most difference. It is a starting point for programme design rather than a complete specification. The fuller treatment of how to build role-specific training is in the complete guide to AI training for employees.
The universal foundation row — output verification and data hygiene for all employees using general-purpose tools — is not optional padding. It is the layer that prevents the most common and costly AI errors across every function. An organisation that prioritises role-specific training without first building this foundation is optimising the advanced layer before the foundational one is in place.
The most effective training sequences build that universal foundation first, then layer in role-specific capability for the functions where the stakes are highest. Trying to deliver role-specific training to employees who do not yet have the foundational habits produces capability built on a fragile base.
The assessment methods for establishing where your team currently sits — before designing any training — are covered in assessing AI learning gaps in your organisation.
Savia's AI literacy learning paths are designed to build the role-specific capability that turns tool access into genuine competence — for the people who actually need to change how they work, not just the ones already fluent in AI.