The traditional instructional design process is a bit like building a cathedral: rigorous, thorough, and, when done well, capable of producing something that serves people reliably for a long time. The problem is that building a cathedral takes a while, and in 2026, trying to apply that same process to AI training is like laying foundations while the ground shifts six inches to the left every Tuesday.

This is not an argument against instructional design. The six-stage development cycle that most L&D professionals work within exists for good reasons, and for a significant proportion of training needs it remains the right approach. The argument is more specific: the method needs to match the shelf life of the content. For topics that change slowly, a rigorous production cycle is an investment. For topics that change faster than the production cycle itself, the same rigour becomes a liability.

Understanding where that line falls — and building your programme accordingly — is one of the most consequential decisions an L&D team can make when approaching AI literacy. We explored the broader challenge of building that programme here. This article goes deeper on a specific tension at its centre: which parts of AI training genuinely benefit from a full production cycle, and which parts need a different approach entirely.

The Six-Stage Cycle: What It Is and What It Was Built For

Before examining where the cycle breaks down, it is worth being precise about what it actually involves. The six stages are well-established in the profession, and each one exists to solve a specific problem in the content development process.

| # | Stage | What it is actually solving |
|---|---|---|
| 01 | SME Briefing: define knowledge, skills, and learning objectives | Ensures the content is grounded in what subject matter experts actually know, not what the instructional designer assumes they know |
| 02 | Storyboard: translate input into a screen-by-screen interaction map | Creates a shared reference point for review before expensive build work begins, catching structural problems cheaply |
| 03 | First Draft Review: stakeholders validate accuracy, tone, and goals | Distributes accountability for content accuracy across the people who will be held responsible for what the course says |
| 04 | Course Build: develop visuals, interactions, and multimedia | Converts approved content into a learning experience, the stage where production investment is concentrated |
| 05 | QA & Sign-off: functional testing and final content validation | Catches technical and factual errors before deployment, when correction is still relatively inexpensive |
| 06 | Deploy & Iterate: publish to the LMS and schedule reviews | Gets the content to learners and establishes the cadence for keeping it current over time |

The cycle is not arbitrary — it is designed to distribute risk across the production process, catching problems at the cheapest possible point before they become expensive ones. The question is not whether these stages are valuable. It is whether the timeline they require is compatible with the rate at which the subject matter changes.

Where the Cycle Still Earns Its Place

There is a category of training content for which the full production cycle remains the right choice, and it is defined by one characteristic: the underlying truth being taught does not change on a quarterly basis. For these topics, the investment in visual polish, rigorous review, and comprehensive sign-off pays dividends over years of deployment rather than becoming obsolete before the course launches.

Works well: compliance and legal foundations
Foundational legal topics — even those touching on AI — are relatively stable reference points. The EU AI Act and GDPR are dense, complex frameworks, but once enacted they change slowly. A module on the legal definition of bias liability, or on the documentation requirements for high-risk AI systems, built through a full production cycle in 2025 will still be substantially accurate in 2027. The investment is justified because the shelf life is long enough to recover it.
Works well: role-specific culture and soft skills
Training focused on how a specific organisation handles performance reviews, conflict resolution, or leadership development benefits from the rigour of a full cycle. These topics are stable by definition — they describe how your organisation operates, not how a third-party model performs. The investment in high-quality visuals and complex interactions pays off across years of use for onboarding and management development programmes.

Both cases share the same underlying logic. The cycle is appropriate when the content has a long shelf life relative to the production time. When a module will be accurate, relevant, and deployable for two or three years, spending three months building it properly is a sound investment. The problem arises when that ratio inverts.

Where the Cycle Becomes the Problem

AI literacy sits in a fundamentally different category from compliance law or soft skills development. The capabilities of AI systems — what they can do, how they behave, what their limits are, what constitutes a well-formed prompt — change faster than the production timeline of a traditional L&D course. This is not a minor scheduling inconvenience. It is a structural mismatch that can make a carefully produced training course actively misleading by the time it reaches learners.

⚠ The Production-Stability Mismatch

A module built in January detailing the context window limits of a specific AI model passes through SME briefing, storyboard, stakeholder review, build, and QA. By the time it clears sign-off in March, a new model iteration has been released with ten times the capacity. The course launches factually incorrect. Every learner who completes it now has a more confident but less accurate understanding of the tool they are using than they had before. Rigour, in this case, has made things worse.

The context window example is representative of a broader pattern. AI tool interfaces are redesigned. Model capabilities expand. Prompt strategies that were effective in early 2024 have been rendered obsolete by models that are more agentic and require less explicit instruction. The shelf life of specific prompting techniques has shortened considerably as models have developed — what was a sophisticated, reliable prompt structure two years ago may now produce worse results than a much simpler instruction, because the model has become better at inferring intent without scaffolding.

The gap between AI adoption and meaningful AI use makes this concrete. When 88% of organisations are using AI but only 5% of employees are using it in ways that transform their work, part of the explanation is that training has focused on how the tools work today rather than how to think about them as they continue to evolve. A course that teaches specific techniques for a specific model version is not building AI literacy. It is building familiarity with a snapshot — and the snapshot expires.

The Shelf Life Problem, Visualised

The core issue is not that AI training content becomes wrong quickly. It is that different types of AI-adjacent content have radically different shelf lives — and a production process that treats them uniformly is optimised for the wrong ones. Consider the rough decay curve across a few representative content types.

Estimated content shelf life for AI-related training topics:

| Topic | Estimated shelf life |
|---|---|
| GDPR / EU AI Act foundations | 2–4 years |
| AI ethics principles | 12–18 months |
| Model capabilities overview | 3–6 months |
| Specific prompting techniques | 4–8 weeks |

A production cycle that takes three months is well matched to content at the top of that chart. It is entirely mismatched to content at the bottom. The implication is not that the bottom categories should not be trained — they absolutely should, and they are often where the most operationally relevant gaps sit. It is that they need a different production model: shorter, faster, modular, and built with the explicit expectation that they will be updated or replaced on a short cycle.
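To make the ratio concrete, here is a minimal sketch in Python that applies it to the chart above. The shelf-life figures are rough midpoints of the estimates in the chart, and the constants and thresholds (PRODUCTION_CYCLE_WEEKS, the cut-offs separating "full cycle" from "lighter model") are illustrative assumptions rather than an established standard.

```python
# A minimal sketch of the shelf-life-to-production-time ratio, using
# midpoints of the estimates in the chart above, expressed in weeks.
# The thresholds below are illustrative assumptions, not a standard.

PRODUCTION_CYCLE_WEEKS = 13  # the ~3-month full cycle discussed above

# (content type, estimated shelf life in weeks, midpoint of the chart's range)
CONTENT_TYPES = [
    ("GDPR / EU AI Act foundations", 156),  # midpoint of 2-4 years
    ("AI ethics principles", 65),           # midpoint of 12-18 months
    ("Model capabilities overview", 20),    # midpoint of 3-6 months
    ("Specific prompting techniques", 6),   # midpoint of 4-8 weeks
]

for name, shelf_life_weeks in CONTENT_TYPES:
    ratio = shelf_life_weeks / PRODUCTION_CYCLE_WEEKS
    if ratio >= 4:
        verdict = "full production cycle is a sound investment"
    elif ratio >= 1:
        verdict = "needs a faster, lighter production model"
    else:
        verdict = "mismatch: content can expire before the course ships"
    print(f"{name}: shelf life / production time = {ratio:.1f} ({verdict})")
```

Run as written, the bottom row comes out below 1.0: the content can be obsolete before the six-stage cycle completes, which is the inversion the article describes.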

What This Means for How You Build Your Programme

The practical implication is a tiered content strategy — not a single production standard applied uniformly, but a deliberate match between content type and production approach. Long-shelf-life content gets the full cycle: rigorous SME involvement, careful storyboarding, polished build, comprehensive sign-off. Short-shelf-life content gets something faster and lighter: a recorded walkthrough, a short scenario, a prompt library entry, a guide embedded in the tool itself.
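One way to make the tiering operational is to treat it as data rather than policy prose. The sketch below is a hypothetical content register in Python, pairing each tier with a production approach and a review cadence; the tier names and cadence figures are assumptions for illustration, not prescriptions from this article.

```python
# Hypothetical tiered content register: each tier pairs a shelf-life band
# with a production approach and a review cadence. Names and cadences are
# illustrative assumptions.

TIERS = {
    "foundation": {        # e.g. EU AI Act, GDPR basics
        "shelf_life": "2-4 years",
        "production": "full six-stage cycle",
        "review_every_days": 365,
    },
    "principles": {        # e.g. AI ethics principles
        "shelf_life": "12-18 months",
        "production": "full cycle with a lighter build stage",
        "review_every_days": 180,
    },
    "capabilities": {      # e.g. model capabilities overviews
        "shelf_life": "3-6 months",
        "production": "recorded walkthrough or short scenario",
        "review_every_days": 60,
    },
    "techniques": {        # e.g. specific prompting techniques
        "shelf_life": "4-8 weeks",
        "production": "prompt library entry or in-tool guide",
        "review_every_days": 21,
    },
}

# A register like this can drive the update workflow directly.
for name, tier in TIERS.items():
    print(f"{name}: {tier['production']} "
          f"(review every {tier['review_every_days']} days)")
```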

Currency is a dimension of quality for fast-moving content. A polished module that is factually accurate is high quality. A polished module that is factually outdated is not — regardless of how well it was produced. For AI literacy training, the team that ships accurate content quickly and updates it regularly is producing higher-quality training than the team that spends six months building something beautiful that is already wrong at launch.

The Reframe

AI literacy is not a destination. It is an ongoing discipline — one that requires a training approach built for movement rather than permanence. The goal for fast-moving AI content is not visual polish. It is speed, accuracy, and the willingness to update before you feel ready to. The cathedral is a wonderful building. It just cannot keep up with Tuesday.

Need content that keeps pace with your team's actual needs?

Our AI literacy courses are built to be current, practical, and updated as the technology evolves — not locked into a production cycle that outlasts the content.