You can find plenty of definitions of AI literacy online, and most of them roughly translate to the same thing: AI literacy is the ability to use AI technologies while understanding their practical, ethical, and compliance limitations.

Most organisations today are intensely focused on the first part of that definition. Understandably so — the productivity upside is real, the competitive pressure is real, and the tools are genuinely remarkable. But the second part — understanding the limitations — tends to be treated as a footnote, if it is addressed at all.

The Actual Definition

AI literacy is not just knowing how to use AI tools. It is knowing when not to, what not to put into them, and what to do with what comes out.

Let's Start With the Literacy Part

In today's professional environment, there is no real doubt that every employee should be able to use basic AI tools to enhance their workflow. The question is whether they are also aware of the risks and challenges that organisations are already experiencing — in some cases, painfully.

Take a straightforward example: you really should not be copy-pasting proprietary company code — or sensitive internal data of any kind — into ChatGPT, Claude, Gemini, or any external AI tool. This is not a theoretical concern. It has already happened at scale.

Samsung · 2023
Engineers pasted proprietary source code into ChatGPT to assist with debugging and code review. In a separate incident, meeting notes containing confidential internal information were fed into the tool for summarisation. Samsung subsequently banned the use of generative AI tools on internal devices while it developed its own internal solution.

Amazon · 2023
Amazon warned employees after ChatGPT outputs shared by users online closely resembled confidential internal Amazon data. The warning noted that Amazon, like any company, could become a third party whose data is inadvertently shared with and stored by OpenAI's systems.

JPMorgan Chase · 2023
The bank restricted employee use of ChatGPT entirely, citing concerns about data confidentiality and regulatory compliance. It joined a growing list of financial institutions that moved to block or heavily restrict external AI tools while internal governance frameworks were developed.
⚠ The Pattern Is Clear

In every case above, the employees involved were not acting maliciously. They were using the tools available to them to do their jobs better. The gap was not intent — it was knowledge. They did not know what the risk was, so they could not recognise when they were creating one.

A Memo Won't Solve This — And Here's Why

Let's be honest about something. A strict company-wide memo on AI usage can absolutely solve the most egregious problems. If employees know that pasting source code into an external AI tool is forbidden, most of them will stop doing it.

But a memo cannot address the more nuanced situations — the ones that require judgement rather than rule-following. Is it acceptable to paste an anonymised version of a customer complaint into an AI tool to help draft a response? What about a redacted contract clause? What about using AI to summarise meeting notes that reference a pending acquisition? These are questions that play out dozens of times a day across your organisation, and no memo is comprehensive enough to answer all of them.

Worth Considering

There is also a second consequence of the memo-only approach. For employees who are already anxious or uncertain about AI — and there are more of them than leadership often realises — a memo that leads with prohibition and risk can deepen that anxiety, making them less likely to engage with AI tools productively at all. That is a different kind of organisational problem.

The Google Problem, Revisited

Remember when Google became ubiquitous, and yet a significant portion of users could not find what they were looking for — not because the information was not there, but because they did not understand how keywords and search queries worked? The tool was powerful. The users were untrained. The results were frustrating.

A version of that same problem is playing out with AI tools right now. People are using Claude, Gemini, ChatGPT, and others, getting mediocre outputs, and concluding that the tools are overhyped. In many cases, the tools are not the problem. The prompts are. Users expect the AI to fill in the gaps without giving it the context, constraints, or specificity it needs to do the job well.

Then, early 2000s: The Google Problem
Powerful tool, untrained users. People typed vague natural-language questions and got irrelevant results, then concluded Google wasn't that useful. The tool worked fine. The mental model was wrong.

Now: The Prompting Problem
Powerful tools, untrained users. People give AI vague, context-free prompts and get generic outputs, then conclude AI is overhyped. Again, the tool often works fine. The mental model is wrong, and so is the input.

AI literacy includes knowing how to prompt effectively. How to give the right context. How to verify outputs rather than accept them. How to use these tools as a thinking partner rather than an answer machine. This is a learnable skill — but it does not develop on its own.
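
To make that concrete, here is a purely illustrative example (the report and audience are hypothetical). Compare two prompts for the same task:

Vague: "Summarise this report."

Better: "Summarise this 12-page quarterly sales report in five bullet points for a non-technical executive audience. Focus on regional trends, flag any figures that look inconsistent, and do not add anything that is not in the report."

The second prompt is not cleverer. It simply supplies the context, constraints, and audience that the first one withholds, which is exactly the skill described above.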

What Real AI Literacy Looks Like

AI literacy means being able to use all of these tools, and to use them well. But real proficiency comes from understanding their actual limitations. Understanding that AI systems hallucinate. That they reflect the biases in their training data. That, by default, they have no memory of your previous conversations unless you supply the context yourself. That what they produce is a starting point, not a finished product. That some of what you want to put into them should never leave your organisation.

An AI-literate workforce is not one that has been told what not to do. It is one that understands enough about how these tools work to make good decisions in situations the policy document never anticipated. That is the standard worth aiming for — and it is achievable, with the right training.

The Core Principle

The organisations that will get the most out of AI are not the ones that move fastest. They are the ones whose people know how to use these tools with genuine judgement — not just enthusiasm.

Ready to build real AI literacy across your team?

If you'd like help balancing the benefits of this technology against its risks and challenges, take a look at our AI and efficiency courses. We'd be happy to help your organisation get there.

Explore AI Literacy Courses →