The Situation

88% of organisations now use AI in at least one business function. Only 5% of employees use it in ways that actually transform their work. The rest are, in effect, calling in Seal Team 6 to rescue a cat stuck in a tree: frontier-grade capability pointed at trivial tasks.

The gap is not about access to tools. It is about training — and specifically, about whether the training approach being used was designed for a world where the technology moves as fast as AI does.

The Traditional L&D Approach — and Why It Still Matters

It is genuinely easy to build a training program. The traditional Learning & Development approach is straightforward and well-established: assess the current qualifications of your employees, define the desired state you want them to reach, identify the competency gap between the two, and then design the appropriate training blend to close it. Assessment. Gap analysis. Curriculum. Delivery.

The result, in theory, is a fully trained workforce — capable not only of using the tools in front of them but of understanding the limitations, the risks, and the ethical dilemmas that come with them. Clean. Logical. Measurable.

Here is the important thing to say about this model: it is not wrong. The underlying principles — understanding where your employees are, defining where you need them to be, and building toward that — are still the right way to think about workforce development. The competency gap framework remains the most useful conceptual tool available for this kind of work.

What has changed, fundamentally, is the process and the thinking required to apply it to AI literacy. And that change is significant enough to warrant a serious rethink of how these programs are built and maintained.

The Problem With AI Literacy Is That It Doesn't Stand Still

Consider a concrete example. What your employees could meaningfully do with Claude Sonnet 4.5 may be quite different from what they can do with Sonnet 4.6. That is one minor model iteration — and already the relevant skills, limitations, and best practices have shifted. Now consider the challenge of explaining the different context window limits across GPT-4o, GPT-4.5, and Claude Sonnet 4.6 in a single training module that is supposed to remain accurate for more than a few weeks.

This is not because the underlying theory is complicated. It is not. Unlike GDPR or the EU AI Act — frameworks that are dense with legal language but essentially static once enacted — AI capabilities evolve on an almost weekly basis. GDPR, for all its length and complexity, is largely a fixed reference point. Courts may interpret it differently over time. The European Commission may issue new guidance. But the text itself does not change between quarters.

AI literacy is different in kind, not just in degree. It is a continuously evolving skillset, where the technology your employees are being trained to use may look meaningfully different by the time the training has been designed, reviewed, approved, and deployed. That is a structural problem for traditional L&D timelines — and it is one that most organisations have not yet fully confronted.

The Core Challenge

With GDPR, you build a course and update it when the law changes. With AI literacy, the technology your course is about may have already moved on by the time the course is ready to launch. The production cycle and the rate of change are fundamentally mismatched.

The Savia Learning Process: How a Standard Course Gets Built

The traditional instructional design process — and where the pressure points emerge for fast-moving topics like AI.

01 · SME Briefing (Owner: SME)
Subject matter experts define the knowledge and skills the course must cover. Learning objectives are agreed and scope is confirmed.

02 · Storyboard (Owner: L&D)
The instructional designer translates the SME input into a structured storyboard — screen by screen, interaction by interaction — before any build begins.

03 · First Draft Review (Owner: Stakeholder)
The storyboard is shared with the client or relevant stakeholder for review. Accuracy, tone, and learning objectives are validated before development starts.

04 · Course Build (Owner: L&D / Dev)
The approved storyboard is developed into a full course — visuals, interactions, assessments, and any multimedia elements are produced and assembled.

05 · QA & Sign-off (Owner: Stakeholder)
The completed course undergoes quality assurance — functional testing, content accuracy checks, and final stakeholder sign-off before deployment.

06 · Deploy & Iterate (Owner: L&D)
The course is published to the LMS. For AI topics specifically, a scheduled review cadence is built in from the start — not treated as a one-time launch.

This process works well — and for many training programs, it remains exactly the right approach. Deep-dive onboardings, compliance foundations, role-specific skill development: these benefit from the rigour that a full storyboard-to-build cycle provides. The process is not the problem.

For AI literacy specifically, the challenge is that steps 1 through 5 can take longer than the technology remains stable. A course that accurately describes the capabilities and limitations of a given model in month one may be quietly misleading by month four. That is not a failure of instructional design. It is a structural mismatch between production timelines and the pace of AI development — and it requires deliberate adaptation, not just better processes.

The Data Behind the Gap

88% of organisations now use AI in at least one business function; adoption is near-universal at the organisational level. (McKinsey, 2025 Global AI Survey)

5% of employees use AI in advanced ways that actually transform their work; the rest use it for basic search, document summaries, and copy-paste assistance. (EY, 2025 Work Reimagined Survey, 15,000 employees across 29 countries)

That gap — between 88% adoption and 5% meaningful use — is not explained by access. It is not explained by willingness. It is explained by training. Or rather, by the absence of it. And it points to a workforce that has been given tools without the foundation to use them well. For more on what that foundation looks like at the individual level, it is worth reading our piece on what AI literacy actually means — and why using a tool is not the same as understanding it.

Building a Program That Keeps Pace

So what does a practical, sustainable AI literacy program actually look like? The answer is not to abandon the principles of good instructional design. It is to apply them differently — with speed, iteration, and practical relevance as the primary design constraints, rather than comprehensiveness and visual polish.

01 · Reinforce agile principles with your L&D team
Picture-perfect visuals are not the priority. Getting accurate, coherently structured information in front of learners is. Build a first version, deploy it, collect feedback, and iterate. A good-enough module that is current will always outperform a polished module that is outdated.

02 · Use AI to build the AI training
Many L&D production processes can be accelerated with the same tools employees are being trained on: text-to-speech for voiceover, AI-generated visuals, automated first drafts. Ironically, AI is part of the problem — so make it part of the solution. The bandwidth this frees up is the bandwidth that keeps your content current.

03 · Make it practical, not theoretical
Reading ten articles about context windows or token limits is not necessary for most employees. Training needs to target on-the-job decisions. Scenario-based, skill-focused modules consistently outperform theory-heavy content on both completion rates and actual competency development.

04 · Rethink the delivery model itself
If your team does not have the bandwidth for week-long onboardings, that is not a resourcing problem to push through — it is a signal to reconsider the format. Shorter, modular content delivered at the point of need is often both more feasible and more effective than comprehensive programmes delivered once.

Worth Asking

Is the real constraint your L&D team's capacity — or the assumption that AI literacy has to be delivered in a particular way? Sometimes the most useful thing is to challenge the format before redesigning the content.

Where to Start — Practically

The first step is still the one traditional L&D always recommends: understand where your employees actually are. Not where you hope they are, and not where the most optimistic interpretation of your last training completion report suggests they might be. Where they actually are. A short skills diagnostic — even an informal one — will tell you more than any assumption.

From there, the goal is not to build a program that covers everything. It is to build a program that covers the right things for the right roles, and that is designed from the outset to be updated. AI literacy is not a destination. It is an ongoing discipline — one that, like the technology itself, will need to keep moving.

Think about what your existing L&D team can genuinely deliver given their current bandwidth and constraints. Put them in the best position to do that well — with the right tools, the right briefs, and a realistic production cadence. If you have gaps, or if you do not yet have an L&D function, there is no shame in looking for the right outside support rather than trying to build everything from scratch under time pressure.

Need a head start?
We've built the content library.

If you need outside help, or if your L&D team needs a strong foundation to build from, take a look at our AI literacy training content. If we don't have exactly what you're looking for yet, we have the expertise to build and adapt it for your organisation.

Explore AI Literacy Training →