The traditional instructional design process is a bit like building a cathedral: rigorous, thorough, and, when done well, producing something that will serve people reliably for a long time. The problem is that cathedrals take a while to build, and in 2026, trying to apply that same process to AI training is like laying foundations while the ground shifts six inches to the left every Tuesday.
This is not an argument against instructional design. The six-stage development cycle that most L&D professionals work within exists for good reasons, and for a significant proportion of training needs it remains the right approach. The argument is more specific: the method needs to match the shelf life of the content. For topics that change slowly, a rigorous production cycle is an investment. For topics that change faster than the production cycle itself, the same rigour becomes a liability.
Understanding where that line falls — and building your programme accordingly — is one of the most consequential decisions an L&D team can make when approaching AI literacy. We explored the broader challenge of building that programme here. This article goes deeper on a specific tension at its centre: which parts of AI training genuinely benefit from a full production cycle, and which parts need a different approach entirely.
The Six-Stage Cycle: What It Is and What It Was Built For
Before examining where the cycle breaks down, it is worth being precise about what it actually involves. The six stages are well established in the profession: subject-matter-expert briefing, storyboarding, stakeholder review, build, quality assurance, and final sign-off. Each one exists to solve a specific problem in the content development process.
The cycle is not arbitrary — it is designed to distribute risk across the production process, catching problems at the cheapest possible point before they become expensive ones. The question is not whether these stages are valuable. It is whether the timeline they require is compatible with the rate at which the subject matter changes.
Where the Cycle Still Earns Its Place
There is a category of training content for which the full production cycle remains the right choice, and it is defined by one characteristic: the underlying truth being taught does not change on a quarterly basis. For these topics, the investment in visual polish, rigorous review, and comprehensive sign-off pays dividends over years of deployment rather than becoming obsolete before the course launches.
Cases like compliance law and soft skills development share the same underlying logic: the cycle is appropriate when the content has a long shelf life relative to the production time. When a module will be accurate, relevant, and deployable for two or three years, spending three months building it properly is a sound investment. The problem arises when that ratio inverts.
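The shelf-life-to-production-time ratio can be expressed as a simple decision heuristic. The sketch below is purely illustrative: the function name, the thresholds, and the category labels are invented for this example, not part of any established L&D framework. The article proposes the ratio as a judgement aid, not a formula.

```python
def recommended_approach(shelf_life_months: float, production_months: float) -> str:
    """Suggest a production model from the shelf-life ratio.

    When content stays accurate for several production cycles, the full
    six-stage cycle is a sound investment; when the ratio approaches or
    falls below 1, a lightweight, rapidly updated format is safer.
    Thresholds here are illustrative assumptions only.
    """
    ratio = shelf_life_months / production_months
    if ratio >= 8:        # e.g. a 24-month shelf life against a 3-month build
        return "full production cycle"
    if ratio >= 2:
        return "lightweight module, scheduled review"
    return "rapid format, expect frequent replacement"

# Two cases from the article: slow-moving compliance content versus
# model-specific detail that a new release can invalidate mid-build.
print(recommended_approach(24, 3))  # long shelf life: full cycle pays off
print(recommended_approach(2, 3))   # shelf life shorter than the build itself
```

The exact thresholds matter less than the habit of asking the question before production begins, rather than after launch.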
Where the Cycle Becomes the Problem
AI literacy sits in a fundamentally different category from compliance law or soft skills development. The capabilities of AI systems — what they can do, how they behave, what their limits are, what constitutes a well-formed prompt — change faster than the production timeline of a traditional L&D course. This is not a minor scheduling inconvenience. It is a structural mismatch that can make a carefully produced training course actively misleading by the time it reaches learners.
A module built in January detailing the context window limits of a specific AI model passes through SME briefing, storyboard, stakeholder review, build, and QA. By the time it clears sign-off in March, a new model iteration has been released with ten times the capacity. The course launches factually incorrect. Every learner who completes it now has a more confident but less accurate understanding of the tool they are using than they had before. Rigour, in this case, has made things worse.
The context window example is representative of a broader pattern. AI tool interfaces are redesigned. Model capabilities expand. Prompt strategies that were effective in early 2024 have been rendered obsolete by models that are more agentic and require less explicit instruction. The shelf life of specific prompting techniques has shortened considerably as models have developed — what was a sophisticated, reliable prompt structure two years ago may now produce worse results than a much simpler instruction, because the model has become better at inferring intent without scaffolding.
The gap between AI adoption and meaningful AI use makes this concrete. When 88% of organisations are using AI but only 5% of employees are using it in ways that transform their work, part of the explanation is that training has focused on how the tools work today rather than how to think about them as they continue to evolve. A course that teaches specific techniques for a specific model version is not building AI literacy. It is building familiarity with a snapshot — and the snapshot expires.
The Shelf Life Problem, Visualised
The core issue is not that AI training content becomes wrong quickly. It is that different types of AI-adjacent content have radically different shelf lives — and a production process that treats them uniformly is optimised for the wrong ones. Consider the rough decay curve across a few representative content types.
A production cycle that takes three months is well matched to content at the top of that chart. It is entirely mismatched to content at the bottom. The implication is not that the bottom categories should not be trained — they absolutely should, and they are often where the most operationally relevant gaps sit. It is that they need a different production model: shorter, faster, modular, and built with the explicit expectation that they will be updated or replaced on a short cycle.
What This Means for How You Build Your Programme
The practical implication is a tiered content strategy — not a single production standard applied uniformly, but a deliberate match between content type and production approach. Long-shelf-life content gets the full cycle: rigorous SME involvement, careful storyboarding, polished build, comprehensive sign-off. Short-shelf-life content gets something faster and lighter: a recorded walkthrough, a short scenario, a prompt library entry, a guide embedded in the tool itself.
Currency is a dimension of quality for fast-moving content. A polished module that is factually accurate is high quality. A polished module that is factually outdated is not — regardless of how well it was produced. For AI literacy training, the team that ships accurate content quickly and updates it regularly is producing higher-quality training than the team that spends six months building something beautiful that is already wrong at launch.
Our AI literacy courses are built to be current, practical, and updated as the technology evolves — not locked into a production cycle that outlasts the content.