98% of organisations have employees using unsanctioned AI tools. People paste customer data into ChatGPT, upload code to AI assistants, and share confidential documents with free AI services. In most cases, none of this is malicious. Employees reach for tools that help them work faster, meet deadlines, and produce better output, and they do so unsafely because the organisation has made AI a strategic priority without providing the training, policy, or approved alternatives that would let them use it safely. If a more strategic training approach had come first, most of this wouldn't be a problem.
The perception gap is striking. 72% of organisations believe they have full visibility into AI usage. 65% of the same organisations report detecting unauthorised shadow AI. What is that gap? It's a training and governance issue. And if you don't know what your employees are using, you can't do anything about it.
Put simply, you need a full inventory of the AI tools your employees are using. Even if you take nothing else from this article, remember that seeing a challenge coming leaves you far better prepared than being blindsided by it. And, unsurprisingly, the same is true for shadow AI. This article explains what shadow AI actually is, why it happens, what it costs, and what your organisation should actually do about it. It builds on the broader compliance framing in GRC and the training gap.
98%
Second Talent via SQ Magazine, 2026
of organisations have employees using unsanctioned AI tools in their daily work.
72%
CultureAI via IT Security Guru, 2026
of organisations believe they have full visibility into AI usage across their workforce.
65%
CultureAI via IT Security Guru, 2026
of the same organisations report detecting unauthorised shadow AI. Both figures are true simultaneously.
Section 01
What Shadow AI Actually Is and How It Differs From Shadow IT
Shadow AI is the use of AI tools, models, or services by employees without the knowledge, approval, or oversight of IT teams. It's the AI-era evolution of shadow IT, but with meaningfully different consequences.
The key distinction is what AI tools do with the data they receive. Standard shadow IT stores data in an unapproved location; shadow AI consumes it. Public models can retain inputs, so if an employee pastes proprietary code or customer personal data into a public model, that data can become part of the model's intelligence, creating an irreversible governance problem for data privacy and intellectual property. You cannot un-train a model.
Shadow AI isn't a single category, either. There are three distinct types, and each requires a different response.
Consumer AI tools
Employees using personal ChatGPT, Claude, or Gemini accounts for work tasks, bypassing enterprise data controls entirely.
68% of employees have used personal accounts to access free AI tools, and 57% have entered sensitive data in those interactions. This is the most visible category. Usually the easiest to address once you've seen it clearly.
Embedded AI features
AI capabilities silently switched on within tools IT has already approved — without governance teams being aware.
70% of AI interactions will happen through features embedded in existing sanctioned SaaS applications.
This is the category most organisations have the least visibility into, because the tool itself was already on the approved list. The AI layer was added quietly, often through a product update.
Agentic shadow AI
The emerging and most serious category.
Autonomous agents with API access that chain actions across multiple services, run continuously, and make decisions without human review. This is a fundamentally different risk from a human pasting data into a chatbot for a single interaction. It requires a fundamentally different governance response. The risk scales non-linearly with the scope of what the agent is authorised to do.
Ask yourself — honestly — which of these three types are you most confident you have under control? If the answer is "none of them," you're not alone, and you're in the same position as nearly every organisation operating in 2026. The first step is admitting that.
Section 02
Why Employees Use Unapproved AI Tools and Why Banning Doesn't Work
Understanding why shadow AI happens isn't an excuse for it. It's a prerequisite for doing anything effective about it. Organisations that treat shadow AI as a compliance problem to suppress consistently fail to address it. Those that treat it as a governance signal tend to succeed.
The reasons employees use unapproved tools are consistent across research. 50% cite faster workflows as the primary motivation. 27% say unapproved tools simply offer better functionality than what their organisation provides. Only 37% of organisations have AI governance policies in place at all — meaning the majority of employees are making their own decisions about what to use and what data to share. Not because they're reckless. Because nobody has told them otherwise.
The implication is direct. Shadow AI isn't primarily a behaviour problem. It's a supply gap. When employees can't find adequate approved alternatives, they find their own. The first instinct is usually to react and block. But employees will find workarounds. The underlying productivity pressure doesn't go away when the policy arrives. The goal isn't elimination. It's to make safe, governed AI easier to use than unsafe AI.
The Evidence That Banning Doesn't Work
One healthcare system intervention yielded an 89% reduction in unauthorised AI use combined with 32 minutes of daily time savings per clinician — not by banning shadow tools, but by replacing them with something better. The shadow AI didn't disappear because it was forbidden. It disappeared because something more useful was offered. That's the pattern that actually works, and it's the opposite of the instinct most organisations act on first.
Section 03
What Shadow AI Actually Costs: Four Categories of Risk
Shadow AI isn't a theoretical risk. The costs are measurable across four categories, and most organisations are underestimating at least two of them.
Category 01
Data exposure and GDPR liability
When an employee pastes customer data, employee records, or confidential documents into an unapproved AI tool, the organisation has lost control of that data. Under GDPR, pasting personal data into a public tool can constitute a reportable breach, with a strict 72-hour window to notify regulators. Shadow AI incidents increase legal and compliance costs by 25–35%, and 44% of companies have already faced compliance violations from unauthorised AI use.
Category 02
Data breach costs
IBM's global study found shadow AI added $670,000 to average breach costs. Organisations with high shadow AI usage experience breach costs averaging $4.63 million per incident. That's not a rounding error. It's a structural cost that grows as AI adoption scales without a corresponding governance programme.
Category 03
Audit exposure
In regulated industries, one in four 2026 compliance audits will include specific inquiries into the governance of AI tools and data handling. Organisations that can't demonstrate visibility into what tools employees use, what data is being processed, and what controls are in place will face audit findings that compound future regulatory exposure. The audit isn't the risk. The undocumented usage it uncovers is.
Category 04
Quality and accountability failures
Shadow AI isn't only a security risk. It's an output quality risk. When employees use unapproved tools to produce customer-facing or decision-influencing content, the organisation is accountable for that output with no visibility into how it was produced.
AI content accountability covers how errors propagate into contracts, customer interactions, and business processes with no audit trail and no accountability chain.
⚠ The Compounding Cost
These four categories don't sit in isolation. They compound. An undetected data exposure in category one becomes a breach in category two, becomes an audit finding in category three, becomes a compounding governance problem that undermines the accountability chain in category four. The longer shadow AI runs without intervention, the more expensive each category becomes. For the broader argument on why the adoption-training gap accumulates cost over time, see AI adoption without AI training.
Section 04
The Governance Gap That Creates Shadow AI
Shadow AI is a symptom of a governance gap, not a cause of one. Most organisations now have formal AI frameworks, policies, and oversight committees. And yet unauthorised AI usage, limited detection, and inconsistent enforcement remain widespread. The result? An illusion of control: governance exists on paper, but behaviour escapes it in practice.
Three specific gaps produce shadow AI at scale. All three are fixable. None of them are fixed by writing a better policy.
1
Supply Gap
No approved alternatives
Employees using unapproved tools are often doing so because no approved equivalent exists — or because the approved option is harder to access than the consumer alternative. Governance without provision is a policy that fails at the point of use.
2
Training Gap
No training on what the policy means in practice
38% of workers misunderstand company AI policies, leading to unintentional violations. Around 50% of employees are unaware of shadow AI risks entirely. A policy that employees can't interpret can't produce the behaviour it's meant to create. This is the same dynamic we see in GDPR compliance: awareness of the rule isn't the same as understanding what it requires.
3
Framework Gap
No distinction between tool types in governance frameworks
Just over one-third of organisations have a dedicated AI policy. Most of those that do treat all AI tools as equivalent, applying the same controls to an enterprise-licensed Microsoft 365 Copilot deployment and a personal ChatGPT account used to summarise a client meeting.
Those aren't the same risk category. The controls that work for one don't work for the other.
Here's the uncomfortable question to put to your own governance framework: if you removed the policy document tomorrow, would employee behaviour actually change? If the answer is "not really" — and for most organisations, it wouldn't — then the policy was theatre, and the behaviour was happening despite it. That's the gap worth closing. For a worked example of how to make governance visible enough that behaviour actually tracks to it, see EU AI Act deployer obligations.
Section 05
What Your Organisation Should Actually Do: a Practical Framework
The organisations that manage shadow AI most effectively follow a consistent pattern: discover first, govern second, enable third, train throughout. Four steps — in this order, because skipping any one of them breaks the next.
1
Discover before you govern
You cannot govern what you cannot see. Before writing policy, get an honest picture of what AI tools employees are actually using, through what accounts, for what tasks, and with what data. It's worth being clear on what the actual security concern is here. It's not the AI part of shadow AI that should concern security leaders. It's the data employees are feeding into that AI. Anonymous surveys combined with network traffic analysis and SaaS discovery tooling give a more complete picture than any single approach alone, and often surface categories of usage that policy authors hadn't anticipated. A minimal sketch of what the technical side of this discovery pass can look like follows the four steps.
2
Classify risk, not just tools
Not all shadow AI carries the same risk. An employee using an unapproved AI writing tool to draft internal meeting notes is a different risk category from an employee using a public AI tool to analyse customer financial data. Effective shadow AI governance requires risk-based classification: controls that enable secure AI use without slowing teams down, rather than blanket restrictions employees will simply route around. The agentic shadow AI category described in section one warrants the most urgent attention, because the risk scales non-linearly with what the agent is authorised to do. A sketch of a simple classification rubric also follows the four steps.
3
Provide approved alternatives that are actually better
This is the step most governance frameworks skip. Companies with clear AI policies and accessible approved alternatives see 67% less shadow AI usage. The policy matters. But the approved alternative matters more. It needs to be genuinely accessible, genuinely functional, and genuinely better for the workflows employees have been using shadow tools to support. A policy that prohibits something without replacing it is a policy that will be ignored.
4
Train on specifics, not principles
Role-specific training scenarios produce the behaviour change that generic awareness content does not. The training content that addresses shadow AI isn't primarily about security awareness. It's about data classification, approved use cases, and the practical difference between enterprise and consumer AI tools.
Employees need to know what they can use, what they cannot, and what to do when they're unsure. When that's clear, consistent, and supported visibly by leadership, it's more effective at sustaining compliance than technical controls alone.
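To make step one concrete, here is a minimal sketch of the network-analysis slice of discovery: tallying web proxy requests to known consumer AI domains by department. The log format, column names, and domain watchlist are illustrative assumptions, not a prescribed toolset; commercial SaaS discovery platforms do the equivalent at far greater scale.

```python
# Minimal sketch: surface potential shadow-AI traffic from a web proxy export.
# Assumptions (illustrative, not from the article): the log is a CSV with
# columns "timestamp,user,department,domain", and the watchlist below is a
# small sample, not an exhaustive inventory of consumer AI services.

import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

def shadow_ai_hits(log_path: str) -> Counter:
    """Count proxy requests to watchlisted AI domains, keyed by department."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().strip()
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[row["department"]] += 1
    return hits

if __name__ == "__main__":
    for dept, count in shadow_ai_hits("proxy_log.csv").most_common():
        print(f"{dept}: {count} requests to consumer AI services")
```

A count like this won't catch embedded SaaS features or agentic usage, which is exactly why it should be combined with the anonymous survey rather than treated as the whole picture.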
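And to make step two concrete, here is a sketch of a risk-based classification rubric that maps a tool tier and a data class to an allow / review / block decision. The tiers, data classes, and decisions are illustrative assumptions meant to show the shape of the rubric, not a recommended policy.

```python
# Minimal sketch of a risk-based classification rubric. The tool tiers,
# data classes, and decisions below are illustrative assumptions only.

from enum import Enum

class ToolTier(Enum):
    ENTERPRISE = "enterprise"   # licensed, contractually governed deployment
    EMBEDDED = "embedded_saas"  # AI feature inside an already-approved SaaS tool
    CONSUMER = "consumer"       # personal or free account, no data agreement

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    PERSONAL = 4  # personal data in the GDPR sense

def decision(tool: ToolTier, data: DataClass) -> str:
    """Return allow / review / block for a tool-and-data combination."""
    if tool is ToolTier.ENTERPRISE:
        return "review" if data is DataClass.PERSONAL else "allow"
    if tool is ToolTier.EMBEDDED:
        return "allow" if data.value <= DataClass.INTERNAL.value else "review"
    # Consumer tools: anything beyond public data is blocked.
    return "allow" if data is DataClass.PUBLIC else "block"

if __name__ == "__main__":
    print(decision(ToolTier.CONSUMER, DataClass.PERSONAL))        # block
    print(decision(ToolTier.ENTERPRISE, DataClass.CONFIDENTIAL))  # allow
```

The point of the rubric is not the specific decisions but that the decision depends on both axes; a blanket rule on tools alone cannot express the difference between meeting notes and customer financial data.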
There's a deeper design principle underneath these four steps. Awareness vs capability training is the difference between telling employees not to paste sensitive data into ChatGPT and teaching them what the data classifications in their specific function are, which approved tools handle each class, and what to do when the boundary is unclear. The former satisfies a requirement. The latter changes what people actually do.
Section 06
The One Thing Most Shadow AI Responses Get Wrong
Most organisations respond to shadow AI by writing a policy and issuing a communication. Policies set intent. Without real-time enforcement at the point of use and training that gives employees a practical framework for decisions, risk is created quietly and at scale.
The most common failure is treating shadow AI as an IT and security problem with a communication solution — rather than a training and enablement problem with a governance overlay. Employees who understand why certain tools create risk, what approved alternatives exist, and how to use them safely behave differently from employees who have simply been told not to use something without understanding why.
Shadow AI, like shadow IT before it, cannot be fully avoided. It has to be managed. The organisations that manage it best have built cultures of informed use rather than fear of enforcement. That's a training outcome, not a policy outcome. And it's what separates the organisations that will handle AI governance well in 2026 from the ones that will be issuing communications about it indefinitely.
The Standard Worth Aiming For
The goal isn't a shadow-AI-free organisation. That doesn't exist in 2026. The goal is visibility, risk-based classification, credible approved alternatives, and training that gives employees the judgment to choose them. The organisations that build this don't eliminate shadow AI. They make it unnecessary.
Shadow AI Governance — Readiness Checklist
You have run a discovery exercise combining anonymous employee surveys, network traffic analysis, and SaaS discovery tooling — not just a policy announcement.
Your governance framework distinguishes between the three types of shadow AI (consumer tools, embedded SaaS features, agentic) — not a single blanket policy.
Approved alternatives exist and are genuinely better than the consumer tools they replace — accessible, functional, and fit for the workflows employees were using shadow tools to support.
Training addresses specifics, not principles — data classification in each function, approved use cases, escalation pathways when the situation is unclear.
Employees can share what they use without fear of reprisal — because that's how the shadow AI your discovery phase missed becomes visible.
Leadership supports the approved tools visibly — because leadership adoption is the fastest path from policy on paper to behaviour in practice.
Frequently Asked Questions
Shadow AI — Common Questions
Answers to the questions CISOs, compliance leads, and L&D directors most commonly ask when building a practical shadow AI response.
What is shadow AI?
Shadow AI is the use of AI tools, models, or services by employees without the knowledge, approval, or oversight of IT teams. It's the AI-era evolution of shadow IT, but with meaningfully different consequences. Standard shadow IT stores data in an unapproved location. Shadow AI consumes it — public models often retain training inputs, meaning pasted customer data or proprietary code can become part of the model's intelligence. You cannot un-train a model. Shadow AI comes in three distinct types: consumer AI tools used for work, embedded AI features silently switched on in approved SaaS tools, and agentic shadow AI where autonomous agents chain actions across services without human review.
Why do employees use unapproved AI tools?
50% of employees cite faster workflows as the primary motivation. 27% say unapproved tools simply offer better functionality than what their organisation provides. Only 37% of organisations have AI governance policies in place at all — meaning most employees are making their own decisions about what to use and what data to share. Not because they're reckless. Because nobody has told them otherwise. Shadow AI isn't primarily a behaviour problem. It's a supply gap.
What does shadow AI actually cost an organisation?
Costs distribute across four categories. Shadow AI incidents increase legal and compliance costs by 25 to 35%, and 44% of companies have faced compliance violations from unauthorised AI use. IBM found shadow AI added $670,000 to average breach costs, with organisations reporting average breach costs of $4.63 million per incident. One in four 2026 compliance audits will include specific inquiries into AI tool governance. And the organisation remains accountable for AI-assisted outputs regardless of whether the tool was sanctioned — see AI content accountability for the detail.
Does banning AI tools reduce shadow AI?
No. Organisations that treat shadow AI as a compliance problem to suppress consistently fail to address it. Employees will find workarounds because the underlying productivity pressure doesn't go away. The goal isn't to eliminate AI usage — it's to make safe, governed AI easier to use than unsafe AI. One healthcare system intervention yielded an 89% reduction in unauthorised AI use combined with 32 minutes of daily time savings per clinician, not by banning shadow tools but by replacing them with something better. Companies with clear policies and accessible approved alternatives see 67% less shadow AI usage.
What is the right framework for addressing shadow AI?
Discover first, govern second, enable third, train throughout. Step one is honest discovery combining anonymous surveys, network analysis, and SaaS discovery. Step two is risk-based classification rather than blanket restrictions. Step three is providing approved alternatives that are genuinely better than the shadow options. Step four is training that covers specifics, not principles: which tools employees can use, what they cannot, and what to do when they're unsure.
The training is what turns the policy from a document into a practice. The full role framework is in AI Literacy by Role.
Shadow AI is a training and governance problem before it's a technology problem.
Savia's AI literacy programmes include specific content on approved tool use, data handling in AI contexts, and the practical distinctions employees need to make safe decisions — building the informed workforce that good shadow AI governance requires.