AI is transforming how organisations operate, how leaders make decisions, and how teams get work done. That much is not in serious dispute. What is in dispute — or at least, what is not yet well understood across many organisations — is what it actually takes to use AI responsibly and effectively at scale.

The conversation tends to focus on the tools themselves: which model to use, how to write better prompts, how to integrate AI into existing workflows. These are worthwhile questions. But they rest on a foundation that organisations often take for granted: the ability of their people to work competently with data, to interpret outputs critically, and to catch errors before they become problems. That foundation is digital literacy. And without it, AI does not make organisations more capable. It makes their mistakes faster and harder to trace.

Effective leadership in this environment now requires two distinct but connected forms of literacy. The first is AI literacy — understanding how AI systems work, what their limitations are, and how to use them with appropriate judgment. We have covered this in depth in our article devoted to AI literacy. The second is foundational digital literacy — the practical skills that allow people to work with data, validate outputs, and make informed decisions about what they are seeing. Both matter. Neither is sufficient on its own. And the relationship between them is the subject of this article.

Human-in-the-Loop Is Only as Good as the Human

Most serious discussions of AI governance eventually arrive at the same principle: Human-in-the-Loop, or HITL. The idea is that a qualified human should be involved in reviewing, validating, or approving AI outputs before they are used in consequential decisions. It is built into the EU AI Act for high-risk AI systems. It is standard guidance across most responsible AI frameworks. And it is, genuinely, the most reliable check on AI error, bias, and hallucination that organisations currently have available.

There is, however, an assumption embedded in this principle that often goes unexamined. HITL assumes the human is capable of performing the oversight function being assigned to them. A person reviewing an AI-generated data analysis needs to be able to read that analysis, understand what it is claiming, and identify where something might be wrong. A person approving an AI-assisted recommendation needs to understand the data it was based on. A person validating a model output needs to know what a plausible output looks like and what an anomalous one looks like.

This is especially important when it comes to AI bias — one of the most persistent and underappreciated risks in deployed AI systems. AI bias occurs when a model produces outputs that systematically favour or disadvantage particular groups, outcomes, or variables, usually because the data it was trained on reflected existing imbalances in the world. A hiring tool trained predominantly on historical data from a male-dominated industry will, without correction, tend to score male candidates more favourably. A credit model trained on data from economically privileged postcodes will underserve applicants from lower-income areas. A customer service AI trained on English-language interactions may perform significantly worse for non-native speakers. None of these outcomes require anyone to have intended them. They emerge quietly from the data — and they go undetected unless the people reviewing the outputs have enough literacy to notice that something is systematically off.

The Uncomfortable Reality

A HITL control is only as effective as the digital literacy of the person in the loop. Put an underskilled reviewer in that position and you do not have a safety mechanism — you have a rubber stamp with a human face on it. AI bias, in particular, is rarely obvious on a case-by-case basis. It shows up in patterns — and catching patterns requires people who know how to look for them.
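
To make "knowing how to look" concrete, here is a minimal sketch in Python of the kind of per-group disparity check a reviewer might run. The file name and column names are hypothetical, and the four-fifths threshold is a common screening heuristic rather than a definitive fairness test.

```python
import pandas as pd

# Hypothetical export of reviewed AI decisions: one row per case,
# with the applicant's group and the model's outcome (1 = approved).
df = pd.read_csv("decisions.csv")  # assumed columns: "group", "approved"

# Approval rate per group.
rates = df.groupby("group")["approved"].mean()

# Screen with the common "four-fifths" heuristic: flag any group whose
# approval rate falls below 80% of the best-served group's rate.
threshold = 0.8 * rates.max()
flagged = rates[rates < threshold]

print(rates.round(3))
if not flagged.empty:
    print(f"Potential systematic disparity: {list(flagged.index)}")
```

Nothing here requires a data scientist. It requires knowing that per-group rates are worth computing at all, which is exactly the literacy this article is about.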

This is why digital literacy is not a nice-to-have alongside AI literacy. It is the prerequisite that makes AI literacy functional. You cannot evaluate what you do not understand. And you cannot catch what you were never trained to look for.

What Digital Literacy Actually Means in Practice

Digital literacy, in this context, means practical proficiency in the tools and methods used to work with data and information: spreadsheets, databases, SQL, data visualisation platforms, workflow tools, and collaboration software. It is not about being a developer or a data scientist. It is about having enough fluency to interact meaningfully with data — to ask it questions, to check its answers, and to know when something does not add up.

Across an organisation, this kind of competency creates value in ways that extend well beyond AI oversight. Consider what it enables at each level.

Data Competency
Employees who understand spreadsheets and databases can interrogate AI outputs directly — checking source data, running their own calculations, and validating claims rather than accepting them at face value (see the sketch after this list).
Visualisation and Communication
Translating AI outputs into clear, accurate visuals for stakeholders is a distinct skill. A digitally literate team can turn model outputs into meaningful insight — rather than impressive-looking charts that nobody can actually interpret.
Operational Efficiency
Teams that understand their own workflows can integrate AI tools into them meaningfully — identifying where automation genuinely helps, and where it introduces more complexity than it removes.
Error Detection and Risk Mitigation
Anomalies in data outputs are much more likely to be caught before they escalate when the people reviewing them have enough quantitative fluency to notice that something looks wrong.
Innovation and Experimentation
Digitally literate teams can experiment with new tools more safely — because they understand enough about the underlying data to know when an experiment is working and when it is not.
Cross-Functional Collaboration
A shared baseline of digital competency means that finance, operations, marketing, and product teams can actually understand each other's data — which is the foundation of effective governance and coordination.
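
To illustrate the first of these competencies, here is a minimal sketch of the kind of spot check a digitally literate reviewer might run before accepting a figure from an AI-generated summary. The file name, column names, and claimed figure are all hypothetical; the same check is a single query in SQL or a pivot table in Excel.

```python
import pandas as pd

# An AI assistant has claimed: "Q3 revenue for the EMEA region was 4.2m."
# Rather than accepting the figure, recompute it from the source data.
orders = pd.read_csv("orders.csv")  # assumed columns: "region", "quarter", "revenue"

claimed = 4_200_000
actual = orders.loc[
    (orders["region"] == "EMEA") & (orders["quarter"] == "Q3"),
    "revenue",
].sum()

# Allow a small tolerance for rounding in the summary.
if abs(actual - claimed) > 0.01 * claimed:
    print(f"Mismatch: source data gives {actual:,.0f}, the AI claimed {claimed:,.0f}")
else:
    print("Claim is consistent with the source data.")
```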

When It Goes Wrong: A Case Study in Basic Errors

It is tempting to assume that high-stakes data errors are the result of sophisticated technical failures. Sometimes they are. But often they are not — and some of the most consequential data mistakes in recent history have had embarrassingly simple causes.

During the height of the COVID-19 pandemic, England's contact tracing system lost nearly 16,000 confirmed coronavirus cases over a period of days due to a data error. The cause was not a model failure or a cyberattack. It was a spreadsheet hitting its maximum row limit. Public Health England was using the legacy XLS file format, which can hold at most 65,536 rows per worksheet. When a file exceeded that limit, it simply stopped recording new entries — silently, without an error message, without an alert. The Guardian reported that, as a result, an unknown number of contacts of positive cases were never traced and never notified — during a period when contact tracing was one of the primary mechanisms for slowing transmission.

⚠ The Scale of the Error

Nearly 16,000 cases were missed. The fix, once identified, was straightforward: use a different file format. The cost of not knowing that Excel had a row limit — in a public health context, during a pandemic — was not a technical footnote. It was a national incident. And it was entirely preventable with basic digital literacy training for the people managing that data pipeline.

This example matters not because it involves AI, but precisely because it does not. If a simple spreadsheet limitation can cause that scale of harm when nobody in the process knew to look for it, consider what happens when you add AI-generated outputs into workflows managed by teams with the same gaps. The errors become more sophisticated. The outputs become harder to verify. And the consequences scale accordingly. The Excel row limit is almost a joke — right up until it is a crisis.
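
The broader habit that would have caught this is simple: reconcile record counts at every hand-off in a data pipeline. A minimal sketch of such a check, with hypothetical file name and count:

```python
import pandas as pd

# Compare what the upstream system says it sent with what actually
# arrived in the working file. A silent truncation (such as an old
# format's 65,536-row ceiling) shows up immediately as a mismatch.
expected = 78_421  # hypothetical count reported by the upstream system
received = len(pd.read_excel("cases.xlsx"))  # hypothetical working file

if received != expected:
    raise ValueError(
        f"Record count mismatch: expected {expected:,}, loaded {received:,}. "
        "Investigate before any downstream processing."
    )
```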

Building on Digital Literacy: Where AI Literacy Fits In

Once a team has genuine digital literacy as a foundation, AI literacy becomes a powerful multiplier. We have explored what AI literacy actually means in a separate piece, but the short version for this context is this: AI literacy allows skilled people to become significantly faster and more capable in their domain, without losing the critical judgment they need to evaluate what the tools are producing.

This is particularly true for subject matter experts. A data analyst who deeply understands SQL and can validate model outputs is in a completely different position when using AI tools than one who cannot. A financial professional who understands the data behind an AI-generated forecast can engage with it critically, push back on anomalies, and take accountability for the decisions informed by it. The expertise is what makes the AI genuinely useful — and the digital literacy is what keeps it honest.
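
As one hedged illustration of what "pushing back on anomalies" can look like, here is a minimal plausibility screen: flag any forecast value that falls far outside recent history. The figures and the three-standard-deviation band are illustrative choices, not a prescribed test.

```python
import statistics

# Trailing twelve months of actuals and an AI-generated three-month
# forecast. All figures are illustrative.
actuals = [102, 98, 105, 110, 107, 111, 115, 112, 118, 121, 119, 124]
forecast = [128, 131, 190]

mean = statistics.mean(actuals)
stdev = statistics.stdev(actuals)

# Flag forecast points more than three standard deviations from
# recent history; they may be right, but they deserve a question.
for month, value in enumerate(forecast, start=1):
    if abs(value - mean) > 3 * stdev:
        print(f"Month {month}: {value} is far outside recent history; ask why.")
```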

What AI Literacy Adds — for Digitally Literate Teams
Interpret AI outputs accurately and avoid over-reliance — knowing when to trust a model and when to dig deeper is a skill in itself, built on a foundation of understanding what the model is actually doing.
Guide human and AI workflows effectively — designing processes where the human adds the most value and the AI handles what it handles well, rather than just bolting AI onto existing workflows.
Evaluate AI tools and model quality — asking the right questions when a new tool is proposed: What was it trained on? What are its known limitations? How does it perform in edge cases relevant to our context?
Anticipate operational disruptions — understanding enough about how models work to foresee where they might degrade, drift, or fail as real-world conditions change.
Build transparency and trust with stakeholders — being able to explain, in plain terms, how an AI-informed decision was made and what safeguards were in place. Stakeholders, regulators, and clients increasingly expect this.

A Practical Roadmap for Leaders

None of this requires an overnight transformation of your organisation's capabilities. What it requires is intentional, sequenced investment in the skills that make AI use genuinely safe and effective — rather than the appearance of AI adoption without the foundation to support it.

The Savia Framework
Five Steps to a Digitally and AI-Literate Organisation
In sequence — because each step prepares the ground for the next.
Step 01
Audit digital and AI literacy across your teams
Before you invest in training, understand what you are actually working with. This does not need to be a formal assessment — a structured conversation with team leads about where people struggle with data, tools, and AI outputs will surface the most significant gaps quickly. The goal is a clear, honest picture of where you are starting from, not where you hope you are.
Step 02
Provide structured training in core digital and AI skills
Address foundational digital skills first — data analysis, Excel and SQL proficiency, data visualisation, workflow tools — and build AI literacy on top of that foundation. Generic training rarely works here. The most effective programmes are role-relevant and scenario-based, focused on the specific data challenges your teams actually face day to day.
Step 03
Implement monitoring and validation for AI outputs
Define where AI outputs need to be reviewed before use, and by whom. Build this into your workflows explicitly — not as a suggestion, but as a documented step. Track error rates over time so you can see whether model performance is stable or drifting. This is your early warning system.
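
As an illustration of what tracking error rates can look like in its simplest form, here is a sketch of a rolling review log: each human-validated output is recorded as correct or not, and the recent error rate is checked against an alert threshold. The window size and threshold are illustrative.

```python
from collections import deque

class OutputReviewLog:
    """Rolling log of human review outcomes for AI outputs."""

    def __init__(self, window: int = 200, alert_threshold: float = 0.05):
        self.outcomes = deque(maxlen=window)  # True = reviewer found an error
        self.alert_threshold = alert_threshold

    def record(self, error_found: bool) -> None:
        self.outcomes.append(error_found)

    def error_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def alert(self) -> bool:
        # A sustained error rate above the threshold over a full window
        # is the early warning signal that performance may be drifting.
        full = len(self.outcomes) == self.outcomes.maxlen
        return full and self.error_rate() > self.alert_threshold

# After each documented review step, record the outcome.
log = OutputReviewLog()
log.record(error_found=False)
log.record(error_found=True)
print(f"Current error rate: {log.error_rate():.1%}")
```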
Step 04
Define accountability frameworks for AI-informed decisions
Establish clearly who is responsible for decisions that are informed, assisted, or generated by AI. Accountability cannot be distributed across a model and a workflow and nobody in particular. When something goes wrong — and at some point, it will — you need a clear answer to the question of who was responsible for catching it.
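
One lightweight way to make that answer unambiguous is to attach a named accountable owner to every AI-informed decision record. A minimal sketch, with hypothetical field values:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIDecisionRecord:
    """Audit record for a decision informed or assisted by AI."""
    decision_id: str
    model_used: str         # which system produced the output
    reviewed_by: str        # the human in the loop
    accountable_owner: str  # who answers for the outcome
    reviewed_at: datetime
    notes: str = ""

record = AIDecisionRecord(
    decision_id="2024-0148",        # hypothetical
    model_used="credit-risk-v3",    # hypothetical model name
    reviewed_by="a.khan",           # hypothetical reviewer
    accountable_owner="head-of-credit",
    reviewed_at=datetime.now(timezone.utc),
)
```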
Step 05
Communicate openly about risks, limitations, and safeguards
With your team, with your stakeholders, and with your clients where relevant. Transparency about how AI is being used and what controls are in place is increasingly expected by regulators and clients alike. Organisations that communicate proactively about their AI governance build trust. Those that wait until something goes wrong to explain their processes do not.

This roadmap is not a one-time project. Digital literacy and AI literacy are both moving targets — the tools evolve, the regulatory environment shifts, and new team members arrive without the context their colleagues have built up. The organisations that do this well treat literacy development as an ongoing operational discipline, not a training event they completed in 2024.

Ready to build a digitally literate, AI-ready team?

We have structured courses covering both foundational data literacy — Excel, SQL, data visualisation, workflow tools — and AI literacy for teams and leaders. If you need something tailored to your organisation's specific context, we can help build that too.