AI is transforming how organisations operate, how leaders make decisions, and how teams get work done. That much is not in serious dispute. What is in dispute — or at least, what is not yet well understood across many organisations — is what it actually takes to use AI responsibly and effectively at scale.
The conversation tends to focus on the tools themselves: which model to use, how to write better prompts, how to integrate AI into existing workflows. These are worthwhile questions. But they rest on a foundation that organisations often take for granted: the ability of their people to work competently with data, to interpret outputs critically, and to catch errors before they become problems. That foundation is digital literacy. And without it, AI does not make organisations more capable. It makes their mistakes faster and harder to trace.
Effective leadership in this environment now requires two distinct but connected forms of literacy. The first is AI literacy — understanding how AI systems work, what their limitations are, and how to use them with appropriate judgment. We have covered this in depth in our article devoted to AI literacy. The second is foundational digital literacy — the practical skills that allow people to work with data, validate outputs, and make informed decisions about what they are seeing. Both matter. Neither is sufficient on its own. And the relationship between them is the subject of this article.
Human-in-the-Loop Is Only as Good as the Human
Most serious discussions of AI governance eventually arrive at the same principle: Human-in-the-Loop, or HITL. The idea is that a qualified human should be involved in reviewing, validating, or approving AI outputs before they are used in consequential decisions. It is built into the EU AI Act for high-risk AI systems. It appears as standard guidance across most responsible AI frameworks. And it is, genuinely, the most reliable check on AI error, bias, and hallucination that organisations currently have available.
There is, however, an assumption embedded in this principle that often goes unexamined. HITL assumes the human is capable of performing the oversight function being assigned to them. A person reviewing an AI-generated data analysis needs to be able to read that analysis, understand what it is claiming, and identify where something might be wrong. A person approving an AI-assisted recommendation needs to understand the data it was based on. A person validating a model output needs to know what a plausible output looks like and what an anomalous one looks like.
This is especially important when it comes to AI bias — one of the most persistent and underappreciated risks in deployed AI systems. AI bias occurs when a model produces outputs that systematically favour or disadvantage particular groups, outcomes, or variables, usually because the data it was trained on reflected existing imbalances in the world. A hiring tool trained predominantly on historical data from a male-dominated industry will, without correction, tend to score male candidates more favourably. A credit model trained on data from economically privileged postcodes will underserve applicants from lower-income areas. A customer service AI trained on English-language interactions may perform significantly worse for non-native speakers. None of these outcomes require anyone to have intended them. They emerge quietly from the data — and they go undetected unless the people reviewing the outputs have enough literacy to notice that something is systematically off.
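Noticing that something is "systematically off" need not be sophisticated. The sketch below is a hypothetical illustration, not a method from any particular framework: it compares approval rates across groups and flags any group whose rate falls below a chosen fraction of the best-performing group's rate. The group names and the 0.8 threshold (borrowed from the common "four-fifths" rule of thumb) are assumptions for the example.

```python
# Hypothetical bias check: compare outcome rates across groups in a
# set of AI-scored decisions and flag large disparities.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups approved at less than `threshold` times the best group's rate."""
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < best * threshold)

# Illustrative data: group A approved 80% of the time, group B only 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
print(flag_disparity(selection_rates(decisions)))  # → ['B']
```

A reviewer does not need to know how the model works internally to run a check like this; they need enough data fluency to know that it should be run at all.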
This is why digital literacy is not a nice-to-have alongside AI literacy. It is the prerequisite that makes AI literacy functional. You cannot evaluate what you do not understand. And you cannot catch what you were never trained to look for.
What Digital Literacy Actually Means in Practice
Digital literacy, in this context, means practical proficiency in the tools and methods used to work with data and information: spreadsheets, databases, SQL, data visualisation platforms, workflow tools, and collaboration software. It is not about being a developer or a data scientist. It is about having enough fluency to interact meaningfully with data — to ask it questions, to check its answers, and to know when something does not add up.
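As a concrete illustration of what "checking its answers" can look like in practice, here is a minimal sanity check over a set of records before anyone trusts a report built from them. The field name, bounds, and expected count are all hypothetical:

```python
# Minimal data sanity check: flag suspicious record counts, missing
# values, and out-of-range values before trusting downstream analysis.
def sanity_check(rows, expected_min_rows, value_field, lo, hi):
    problems = []
    if len(rows) < expected_min_rows:
        problems.append(f"only {len(rows)} rows; expected at least {expected_min_rows}")
    missing = sum(1 for r in rows if r.get(value_field) is None)
    if missing:
        problems.append(f"{missing} rows missing '{value_field}'")
    out_of_range = sum(
        1 for r in rows
        if r.get(value_field) is not None and not lo <= r[value_field] <= hi
    )
    if out_of_range:
        problems.append(f"{out_of_range} rows with '{value_field}' outside [{lo}, {hi}]")
    return problems

# Illustrative run: a short, partly broken dataset.
rows = [{"sales": 120}, {"sales": None}, {"sales": -5}]
for problem in sanity_check(rows, expected_min_rows=10, value_field="sales", lo=0, hi=1000):
    print(problem)
```

None of this is data science. It is the habit of asking whether the numbers in front of you are even plausible, which is exactly the habit AI oversight depends on.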
Across an organisation, this kind of competency creates value in ways that extend well beyond AI oversight, from frontline teams who can validate their own reports to leaders who can interrogate the numbers behind a decision rather than accept them on faith.
When It Goes Wrong: A Case Study in Basic Errors
It is tempting to assume that high-stakes data errors are the result of sophisticated technical failures. Sometimes they are. But often they are not — and some of the most consequential data mistakes in recent history have had embarrassingly simple causes.
During the height of the COVID-19 pandemic, England's contact tracing system lost almost 16,000 confirmed coronavirus cases over a period of days due to a data error. The cause was not a model failure or a cyberattack. It was a spreadsheet hitting its maximum row limit. Public Health England was using an outdated Excel file format that can hold only 65,536 rows per worksheet. When the file exceeded that limit, it simply stopped recording new entries — silently, without an error message, without an alert. The Guardian reported that as a result, an unknown number of contacts of positive cases were never traced and never notified — during a period when contact tracing was one of the primary mechanisms for slowing transmission.
The fix, once identified, was straightforward: switch to a modern file format. The cost of not knowing that Excel had a row limit — in a public health context, during a pandemic — was not a technical footnote. It was a national incident. And it was entirely preventable with basic digital literacy training for the people managing that data pipeline.
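For anyone who maintains a data pipeline, the guard that would have caught this failure is simple to express. The sketch below is illustrative rather than a reconstruction of the actual system (it writes CSV rather than Excel, and the file handling is hypothetical); the 65,536 figure is the legacy .xls worksheet row limit. The point is the pattern: re-read what you wrote and compare counts, instead of assuming the write succeeded.

```python
# Guard against silent truncation: refuse to use a format near its
# limit, and verify the row count on disk matches the source count.
import csv
import os
import tempfile

XLS_ROW_LIMIT = 65_536  # legacy .xls worksheet row limit

def export_with_check(records, path):
    if len(records) >= XLS_ROW_LIMIT:
        raise ValueError(
            f"{len(records)} records exceed the legacy row limit; use a modern format"
        )
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(records)
    # Re-read the file: did every record actually reach the disk?
    with open(path, newline="") as f:
        written = sum(1 for _ in csv.reader(f))
    if written != len(records):
        raise RuntimeError(f"wrote {written} rows but expected {len(records)}")
    return written

# Demo: two records round-trip cleanly.
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".csv")
tmp.close()
print(export_with_check([["case", 1], ["case", 2]], tmp.name))  # → 2
os.unlink(tmp.name)
```

A check like this is a few minutes' work for someone with basic scripting literacy, and it converts a silent data loss into a loud, immediate failure.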
This example matters not because it involves AI, but precisely because it does not. If a simple spreadsheet limitation can cause that scale of harm when nobody in the process knew to look for it, consider what happens when you add AI-generated outputs into workflows managed by teams with the same gaps. The errors become more sophisticated. The outputs become harder to verify. And the consequences scale accordingly. The Excel row limit is almost a joke — right up until it is a crisis.
Building on Digital Literacy: Where AI Literacy Fits In
Once a team has genuine digital literacy as a foundation, AI literacy becomes a powerful multiplier. We have explored what AI literacy actually means in a separate piece, but the short version for this context is this: AI literacy allows skilled people to become significantly faster and more capable in their domain, without losing the critical judgment they need to evaluate what the tools are producing.
This is particularly true for subject matter experts. A data analyst who deeply understands SQL and can validate model outputs is in a completely different position using AI tools than one who cannot. A financial professional who understands the data behind an AI-generated forecast can engage with it critically, push back on anomalies, and take accountability for the decisions informed by it. The expertise is what makes the AI genuinely useful — and the digital literacy is what keeps it honest.
A Practical Roadmap for Leaders
None of this requires an overnight transformation of your organisation's capabilities. What it requires is intentional, sequenced investment in the skills that make AI use genuinely safe and effective — rather than the appearance of AI adoption without the foundation to support it.
This roadmap is not a one-time project. Digital literacy and AI literacy are both moving targets — the tools evolve, the regulatory environment shifts, and new team members arrive without the context their colleagues have built up. The organisations that do this well treat literacy development as an ongoing operational discipline, not a training event they completed in 2024.
We have structured courses covering both foundational data literacy — Excel, SQL, data visualisation, workflow tools — and AI literacy for teams and leaders. If you need something tailored to your organisation's specific context, we can help build that too.