BCG's 2025 research finds that only 25% of employees say they have received sufficient AI training from their employer — despite the majority of organisations having adopted at least one AI tool. The gap is not investment. The gap is specificity.

What AI training do employees actually need? Not another awareness session about what AI is. Employees need training on the specific tools already embedded in their daily workflows, the judgment to evaluate what those tools produce, and the role-specific skills to apply AI safely in their particular function. This guide breaks that down concretely: the tools, the knowledge gaps, and the four skills that matter most regardless of which products your organisation uses.

It is designed to sit alongside the complete guide to building an AI training programme, which covers programme design, assessment, and governance, rather than to repeat it.

Section 01

Why AI Awareness Is No Longer Enough

There is a difference between knowing AI exists and knowing how to use it well enough that your organisation benefits rather than takes on risk. Awareness training covers the first. It rarely touches the second. The gap between the two has become more consequential as AI has moved from an optional experiment to an embedded feature of most workplace tools.

74% of employees say they need better training to use AI effectively in their roles (Salesforce, State of the AI-Connected Customer, 2025). Access to tools is not the constraint. Knowing how to apply them well is.

55% of employees who use AI at work say they lack the skills to use it effectively for their specific job (Microsoft, Work Trend Index, 2025). General awareness training does not close that gap. Role-specific capability does.

The tools are already embedded in most employees' daily workflows, often without their full awareness. Microsoft Copilot is built into Microsoft 365. Google Gemini is integrated into Workspace. ChatGPT is being used whether IT has approved it or not. The question is not whether employees are using AI. It is whether they understand what they are doing when they use it — and whether they can tell when it is getting something wrong.

What AI literacy actually means is not familiarity with tools. It is the combination of domain knowledge, critical judgment, and practical habits that allows someone to use AI effectively and catch it when it fails. Awareness training develops none of these.

Section 02

The Tools Employees Are Already Using — and What They Need to Know

Most training programmes skip this section entirely. They cover AI in general terms without addressing the specific tools employees encounter daily — which means employees leave with improved conceptual awareness but are no better equipped for their actual working environment. The tool categories below are organised by type rather than brand, because the specific product matters less than understanding what kind of AI is involved and what its characteristic failure modes are.

General-purpose AI assistants — ChatGPT, Microsoft Copilot, Google Gemini

These are the tools most employees encounter first and use most frequently. They are versatile, powerful, and, for users who have not been trained in how they work, genuinely risky in proportion to how confident they sound. The core issue is that these models produce incorrect information with exactly the same fluency and apparent certainty as accurate information. An employee who does not know this will treat the output as authoritative. An employee who does know this will treat it as a well-informed first draft that requires verification.

Tool Category 01
General-purpose AI assistants
ChatGPT · Microsoft Copilot · Google Gemini
What employees need to understand
How these tools generate responses and why they can be confidently wrong. They predict plausible text, not verified facts. The confidence of the output has no relationship to its accuracy.
How to write prompts that produce useful, specific outputs rather than generic ones. In 2026, prompt literacy is as essential a workplace skill as writing a clear email. A minimal before-and-after example follows this list.
What data should never go into a public version of these tools. In 2023, Samsung engineers pasted proprietary source code into ChatGPT while debugging, inadvertently sending confidential intellectual property to OpenAI's servers before any data processing agreement existed. Samsung discovered the breach internally and banned external AI tools on company devices within weeks. Once data enters a public AI model, the organisation has lost control of it.
How to verify factual claims before acting on them or sharing them. Output verification is the single most undertrained AI skill in 2026.
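To make "specific" concrete, here is a minimal before-and-after sketch. The prompts and the scenario are illustrative, not drawn from any particular vendor's guidance; the point is the structure: outcome, format, constraints, audience.

```python
# Illustrative only: the same request written two ways. The second prompt
# pins down outcome, format, constraints, and audience explicitly.

vague_prompt = "Write something about our Q3 results."

specific_prompt = """
Draft a 150-word internal update on Q3 results for the sales team.
Format: three short paragraphs, no bullet points.
Constraints: use only the figures pasted below; do not invent numbers.
Audience: account managers who have not seen the finance pack.
Tone: factual, no marketing language.
"""
```

The first prompt leaves the model to guess length, audience, and sourcing, and it will guess confidently. The second removes the guesswork and makes it obvious when the output has ignored a constraint.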
Embedded workplace AI — Microsoft 365 Copilot, Google Workspace AI, Notion AI

Unlike standalone chat tools, these are woven directly into the applications employees already use. Many employees are using these features without fully registering that they are interacting with AI at all. A meeting summary appears in Teams. A reply suggestion appears in Gmail. A document outline appears in Notion. The interaction feels like a standard product feature, not an AI output requiring scrutiny.

Tool Category 02
Embedded workplace AI
Microsoft 365 Copilot · Google Workspace AI · Notion AI
What employees need to understand
AI-generated summaries can miss nuance or introduce errors. A document summary that omits a key caveat or a draft email that misrepresents a commitment can cause downstream problems that feel inexplicable until someone traces them back to an unreviewed AI output.
How to review and edit AI-generated content rather than publishing it unchecked. The speed benefit of AI drafts is real. Treating them as finished products is a risk.
The data privacy implications of using AI features within company-licensed tools versus personal accounts. These are often meaningfully different, and most employees have not been told how.
When these tools are helpful versus when they produce plausible-sounding noise. Routine email drafts and standard document structures are low-risk. Sensitive communications, contractual language, and compliance-related content are not. A sketch of how that distinction can be written down as an explicit review policy follows this list.
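One way to make that distinction operational, rather than a judgment call made under deadline pressure, is to write the review policy down explicitly. A minimal sketch, with hypothetical content categories and review tiers that a real organisation would define for itself:

```python
# Hypothetical review policy: maps content categories to the review an
# AI-generated draft needs before it leaves the author's hands.
REVIEW_POLICY = {
    "routine_email": "light read-through by the author",
    "document_outline": "light read-through by the author",
    "client_communication": "full review against source material",
    "contractual_language": "full review plus legal sign-off",
    "compliance_content": "full review plus compliance sign-off",
}

def required_review(category: str) -> str:
    # Unknown categories default to the strictest tier, not the lightest.
    return REVIEW_POLICY.get(category, "full review plus legal sign-off")

print(required_review("routine_email"))         # light read-through by the author
print(required_review("contractual_language"))  # full review plus legal sign-off
```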
Meeting and communication AI — Otter.ai, Fireflies, Teams transcription, Zoom AI Companion

Meeting transcription and summarisation tools are now standard in most enterprise environments. They record, transcribe, and summarise conversations, often automatically, sometimes without all participants being aware. The legal and reputational exposure here is distinct from other AI categories: it involves data belonging to other people, sometimes recorded without explicit consent, processed by third-party systems with their own data retention practices.

Tool Category 03
Meeting and communication AI
Otter.ai · Fireflies · Microsoft Teams transcription · Zoom AI Companion
What employees need to understand
The accuracy limitations of AI transcription. Independent research has found word error rates of 10 to 25% in real meeting conditions, rising significantly for non-native speakers and domain-specific terminology. A distributed summary containing errors about who said what or what was decided can cause more damage than no summary at all. A worked example of what those rates mean over a single meeting follows this list.
Consent and notification obligations around recording, particularly across jurisdictions. What is legally required in one country may not be sufficient in another — and this is an area where the employee, not just the IT team, carries responsibility.
How to review AI meeting summaries before distributing them, and what typically gets lost: the tone of a discussion, the context behind a decision, the reservations someone expressed that did not make it into the action items.
The data retention and confidentiality implications of third-party transcription tools. Where does the recording go? Who has access to it? How long is it stored? These questions should have organisational answers, and employees should know them.
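To put those error rates into concrete terms, consider the arithmetic over a single meeting. The figures below are assumptions for illustration (speaking pace varies widely); the formula itself, errors divided by words spoken, is the standard definition of word error rate.

```python
# Word error rate (WER) = (substitutions + deletions + insertions) / words spoken.
# Meeting length and speaking pace below are illustrative assumptions.

words_per_minute = 115           # typical conversational pace (assumption)
meeting_minutes = 60
words_spoken = words_per_minute * meeting_minutes   # 6,900 words

for wer in (0.10, 0.15, 0.25):
    errors = round(words_spoken * wer)
    print(f"At {wer:.0%} WER: roughly {errors} transcription errors in one meeting")

# At 10% WER: roughly 690 transcription errors in one meeting
# At 25% WER: roughly 1725 transcription errors in one meeting
```

Even the optimistic end of that range produces hundreds of wrong words per meeting, which is why unreviewed distribution, not transcription itself, is the risk.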
AI in data and analytics tools — Excel Copilot, Power BI, Google Looker

For non-technical employees who work with data, AI is increasingly surfacing insights, generating formulas, and producing charts automatically. This is where the stakes of uncritical AI use are highest. A wrong number presented confidently in a board report or client proposal is a significant organisational risk — and the confidence with which AI presents numbers makes them harder to question, not easier.

Meta's Galactica model, developed to assist with scientific research, produced outputs that were coherent, plausible, and wrong. In testing, it fabricated citations, generated confident-sounding but fictitious scientific claims, and produced text that passed casual review as authoritative. It was taken offline within three days of public release. The lesson applies to every AI tool handling analytical or factual content: the more sophisticated the presentation, the less likely an uncritical reviewer is to question it.

Tool Category 04
AI in data and analytics tools
Excel Copilot · Power BI · Google Looker
What employees need to understand
How to sense-check AI-generated calculations and charts against raw data. This requires knowing what the answer should roughly look like before asking the AI — which is a domain expertise question, not a technology question. A minimal sketch of that kind of sanity check follows this list.
The difference between correlation and causation in AI-generated insights. AI finds patterns. It does not explain why they exist. Acting on a correlation as though it were a causal finding is one of the most common and costly misapplications of AI-assisted analysis.
How to communicate uncertainty when presenting AI-assisted analysis. "The model suggests X" and "X is the case" are different statements — and the distinction matters in regulated industries where decisions need to be traceable.
Basic data hygiene. What goes into these tools and what should not — both for data quality reasons and data governance ones.
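What the rough-estimate habit looks like when made mechanical: compute the figure independently from the raw data and flag any material gap before the AI-generated version goes anywhere. A minimal sketch; all numbers are made up for illustration.

```python
# Sanity-checking an AI-generated figure against the raw data it claims to
# summarise. All values are illustrative.

raw_monthly_revenue = [412_000, 389_500, 441_200]   # from the source spreadsheet

ai_reported_total = 1_842_700    # the figure as it appeared in the AI chart

independent_total = sum(raw_monthly_revenue)        # 1,242,700

tolerance = 0.01   # allow 1% for rounding in the tool's presentation
gap = abs(ai_reported_total - independent_total)
if gap > tolerance * independent_total:
    print(f"Mismatch: AI reports {ai_reported_total:,}, raw data sums to "
          f"{independent_total:,}. Do not publish until reconciled.")
```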
Role-specific AI tools — by function

Beyond the universal tool categories above, employees in specific functions encounter AI embedded in the products their roles depend on daily. This is where training stakes are often highest, because the AI is operating in domain-specific contexts where errors carry professional, legal, or reputational consequences, and where the employee may be the only person in a position to catch them.

Research from 2026 shows that managers are already using AI for strategic tasks including data analysis, talent decisions, and priority management, while most individual contributors are still at the drafting-and-checking stage. That gap will narrow as tools become more embedded in role-specific platforms — which makes role-specific training urgent, not optional.

Role-Specific Tool Exposure

HR teams using AI in applicant tracking systems such as Workday and Greenhouse need to understand bias risks in AI-assisted candidate screening. Amazon's internal hiring tool, trained on historical resumes, systematically downranked candidates who attended women's colleges or listed women's organisations on their profile — without any explicit instruction to do so. It was scrapped after the pattern was identified internally. The same risk exists in any tool trained on historical hiring data, and it requires human review at the screening stage, not just at the offer stage.

Marketing teams using tools like Jasper or HubSpot AI need to understand what happens when AI-generated content goes out without adequate review. In 2023, CNET published dozens of AI-generated financial explainers containing factual errors that passed through unchecked because the output looked polished and authoritative. CNET issued public corrections, the press coverage was extensive, and the reputational damage outlasted the corrections by months. AI tools have no mechanism for flagging when a claim they generate is unverifiable. That is the reviewer's job, and it only gets done if the reviewer has been trained to do it.

Customer service teams using AI response suggestions and chatbots need to know when to override and escalate. In 2024, Air Canada's chatbot gave a passenger incorrect information about bereavement fares, telling him he could apply for a discounted rate retrospectively when no such policy existed. He booked flights in reliance on that advice. Air Canada argued the chatbot was a separate legal entity responsible for its own statements. The BC Civil Resolution Tribunal rejected this outright and ordered Air Canada to pay the passenger CAD $812 in compensation plus additional expenses, establishing that organisations are fully responsible for everything their AI publishes regardless of whether a human reviewed it.

Finance teams using Excel Copilot or Power BI AI features need to understand model assumptions and verification requirements before any AI-assisted figures reach a board pack or a regulatory submission. A number that arrives formatted as an AI-generated chart carries the same verification obligation as a number in a raw spreadsheet — and is considerably easier to present without scrutiny. The full accountability framework across all of these functions is covered in detail in AI content accountability: how to manage risk, quality, and ownership.

Section 03

The AI Skills Employees Need — Regardless of Which Tools They Use

Tools change every few months. The AI skills employees need in 2026 and beyond are the ones that compound over years. A team trained in these four skills will adapt faster to new tools, catch more errors, and make more accountable decisions regardless of what the AI landscape looks like in twelve months.

01
Prompt literacy
The practical ability to give AI clear, specific instructions and recognise when an output does not match what was asked. In 2026, writing a useful prompt is as essential a workplace skill as writing a good email. The focus should be on specificity: what outcome do you want, in what format, with what constraints, for what audience.
02
Output verification
The habit of checking whether what AI produced is accurate, complete, and appropriate before using it. Not a technical skill but a professional one, and the skill with the highest organisational cost when absent. Output verification is the most critical and most undertrained AI skill in 2026. France's Lucie AI chatbot confidently informed users that cows lay eggs. Amusing when the subject is obvious. Far less so when the employee has no independent knowledge of the domain and the output is going into a client document.
03
Data hygiene
Knowing what information can and cannot go into AI tools, and why. Both a security issue and a regulatory one, it needs to be trained to the point of instinct, not just written into a policy document. The Samsung incident described earlier is the canonical case: engineers pasted proprietary source code into a public AI tool while debugging, and the company had lost control of that intellectual property the moment it was submitted. The employee who pauses before pasting client data into a public AI tool is acting on a trained habit. The employee who does not pause was simply never trained. A minimal sketch of what a pre-paste check might look like follows this list.
04
Knowing when to override
Understanding which decisions should never rest on AI output alone, and having the confidence to act on that understanding. This is the individual expression of human oversight, and without it governance frameworks exist only on paper. The biggest barrier to effective override is not capability but culture. Employees who feel that questioning an AI output is disloyal to the technology their organisation has invested in will not override, even when they should. Human in the Loop: how to build AI oversight into your team's workflow covers the full framework for making this a reliable organisational behaviour.
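As an illustration of how a trained data-hygiene habit can be backed by tooling, here is a minimal pre-paste scan. The patterns are deliberately crude and hypothetical; a real deployment would draw on the organisation's own data classification, not a regex list from a blog post.

```python
import re

# Hypothetical patterns only; a real list comes from the organisation's
# data classification policy.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible API key": re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_before_pasting(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Can you debug this? It fails for user jane.doe@acme.example"
hits = check_before_pasting(draft)
if hits:
    print("Pause before pasting. This looks like it contains:", ", ".join(hits))
```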
Worth Remembering

Tool familiarity expires every time a product updates. These four skills do not. An employee who can verify outputs, protect data, write effective prompts, and know when to push back is equipped for whatever the AI landscape looks like next year — and the year after that.

Section 04

What This Looks Like by Role — A Quick Reference

The table below maps each organisational role to the AI tools they most commonly encounter and the training focus that will make the most difference. It is a starting point for programme design rather than a complete specification. The fuller treatment of how to build role-specific training is in the complete guide to AI training for employees.

| Role | Tools most likely to encounter | Priority training focus | Stakes |
| --- | --- | --- | --- |
| All employees (universal) | ChatGPT, Copilot, Gemini | Output verification, data hygiene | Foundation |
| Managers | Teams/Zoom AI, 365 Copilot, summarisation tools | Meeting summary review, override confidence | High |
| Knowledge workers | Copilot, Notion AI, research assistants, data tools | Prompt literacy, sense-checking analysis, source verification | Medium |
| Customer service | AI response suggestions, chatbot tools, CRM AI | When to escalate, accountability for AI-generated responses | High |
| HR | AI in ATS platforms (Workday, Greenhouse) | Bias awareness in AI screening, human sign-off requirements | High |
| Marketing | Jasper, HubSpot AI, image generation tools | Fact-checking obligations, brand accountability, disclosure | Medium |
| Finance | Excel Copilot, Power BI, forecasting tools | Data verification, model assumptions, communicating uncertainty | High |
| Legal / Compliance | Legal research AI, document review tools | Citation verification, hallucination detection, human authority on final outputs | High |

The universal foundation row — output verification and data hygiene for all employees using general-purpose tools — is not optional padding. It is the layer that prevents the most common and costly AI errors across every function. An organisation that prioritises role-specific training without first building this foundation is optimising the advanced layer before the foundational one is in place.

The Sequencing Question

The most effective training sequences build the universal foundation first — output verification and data hygiene for everyone — then layer in role-specific capability for the functions where stakes are highest. Trying to deliver role-specific training to employees who do not yet have the foundational habits produces capability built on a fragile base.

The assessment methods for establishing where your team currently sits — before designing any training — are covered in assessing AI learning gaps in your organisation.

Frequently Asked Questions
AI Training Needs — Common Questions
Answers to the questions employees, managers, and L&D leads most commonly ask about what AI training should actually cover.
What AI skills do employees need in 2026?
The four AI skills employees need are prompt literacy, output verification, data hygiene, and knowing when to override. These skills transfer across tools and compound over time — unlike tool-specific training, which expires every time a product updates.
What AI tools are employees using at work in 2026?
Employees encounter AI across five main categories: general-purpose assistants (ChatGPT, Copilot, Gemini), embedded workplace AI (Microsoft 365 Copilot, Google Workspace AI, Notion AI), meeting and communication AI (Otter.ai, Fireflies, Teams transcription, Zoom AI Companion), AI in data and analytics tools (Excel Copilot, Power BI, Looker), and role-specific tools in HR, marketing, customer service, and finance platforms. Many employees are using the second and third categories without fully registering that they are interacting with AI.
Why is AI awareness training not enough for employees?
Awareness training tells employees that AI exists and broadly what it can do. It does not build the judgment to evaluate outputs critically, the habits to protect sensitive data, or the role-specific skills to apply AI effectively. BCG found that only 25% of employees say they have received sufficient AI training from their employer. The tools arrived. The capability to use them well did not follow automatically.
What is the most important AI skill for employees to develop?
Output verification is the most critical and most undertrained AI skill in 2026. AI systems produce incorrect information with the same confidence as accurate information. No other AI skill prevents more organisational risk — and no other skill is more consistently absent from standard AI training programmes.
Know which tools your team uses.
Build the skills to use them well.

Savia's AI literacy learning paths are designed to build the role-specific capability that turns tool access into genuine competence — for the people who actually need to change how they work, not just the ones already fluent in AI.