Most employers running AI training programs today are doing so without a clear external benchmark to measure against. They are making reasonable decisions in the absence of a shared standard: which tools to cover, which employees to prioritise, what good looks like.

The numbers confirm the problem. 82% of enterprise leaders say their organisation provides some form of AI training in 2026 — and yet 59% still report an AI skills gap. Training is happening. Capability is not following. 42% of employees say their employer expects them to learn AI on their own, while only 17% use AI tools frequently in their work. That gap reflects the absence of any shared standard for what AI-ready looks like.

The US Department of Labor's AI Literacy Framework, published on February 13, 2026 through Training and Employment Notice 07-25, changes that. It is not a mandate. No enforcement mechanism exists behind it. But it establishes, for the first time at a federal level, a clear definition of what AI literacy means for workers. It signals the direction of travel for employers who want to stay ahead of workforce expectations rather than catch up to them. This article explains what the framework actually says, what it asks of employers, and what the practical difference is between organisations that treat it as background reading and those that use it.

Section 01

What the Framework Is — and What It Is Not

The DOL's AI Literacy Framework is voluntary guidance, not regulation. It does not impose obligations on private employers, carry penalties for non-compliance, or require any reporting. Organisations can ignore it without legal consequence.

That framing matters, because the temptation when a government body releases a voluntary framework is to file it and move on. That would be a mistake. Not for compliance reasons. For competitive ones.

What the framework actually is: a federal-level definition of AI literacy developed in response to a straightforward problem. AI is reshaping how work gets done across virtually every sector, and the country lacked a shared vocabulary or common structure for training workers to use it. Before this release, AI literacy efforts across the US operated in largely fragmented fashion: community colleges developed their own curricula, private bootcamps marketed proprietary programs, and state workforce agencies ran pilots that rarely scaled. The framework provides a common architecture for the first time.

Employers who align their training programs to it are not meeting a legal obligation. They are building against the benchmark that hiring, procurement, and workforce development conversations will increasingly reference. Secretary of Labor Lori Chavez-DeRemer said at launch: "The Department of Labor is committed to making sure all American workers are able to share in the prosperity that AI will create for our economy."

Section 02

What the Framework Defines as AI Literacy

The DOL defines AI literacy as a foundational set of competencies that enable individuals to use and evaluate AI technologies responsibly, with a primary focus on generative AI as the technology most central to the modern workplace.

That definition has a deliberate scope. The framework is not describing what AI engineers or data scientists need. It is describing what every worker needs: the baseline capability to engage with AI tools confidently, appropriately, and without creating risk for the organisation or themselves. The framework organises this into five foundational content areas:

DOL AI Literacy Framework — Content Areas (5)
Understanding AI principles · Exploring potential uses · Directing AI effectively · Evaluating outputs · Using AI responsibly

DOL AI Literacy Framework — Delivery Principles (7)
The framework specifies not just what to teach, but how. Section 04 covers all seven principles and what they mean for current program design.

What is notable about that content list is what it excludes. Technical proficiency — building models, writing code, configuring systems — is explicitly out of scope. The most in-demand AI skills in 2026 are not deeply technical. They are interpretive, applied, and judgment-driven. The framework is asking for judgment, not expertise. That distinction matters enormously for how employers design and target their training investment, and it is the same distinction that separates awareness training from genuine upskilling.

Section 03

What It Says Specifically About Employers

The framework addresses employers directly, and the guidance it offers is more operational than most voluntary frameworks tend to be. Rather than abstract principles, it describes specific decisions employers should make.

It asks employers to start by reviewing current workflows where AI tools are already emerging: drafting, data analysis, customer communication. The framework asks them to assess how basic AI literacy helps employees work more effectively in those specific contexts. Critically, it explicitly asks employers to identify what level of AI literacy different roles require, rather than applying a single standard across the workforce.

This role-differentiated approach is one of the framework's most important practical signals. Nearly a quarter of enterprise leaders cite untailored learning paths as a key failure of their current training programs. The DOL is not suggesting that a warehouse operative and a senior analyst need identical AI literacy training. It is asking employers to do the work of mapping what each role actually requires, and then building training that matches that map. Understanding AI training requirements by role is the starting point for that mapping exercise.
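As a concrete illustration, that role-mapping exercise can be sketched as a simple lookup table that records the literacy level each role requires. The role names and the three level labels below are hypothetical assumptions for the sketch, not categories taken from the framework itself.

```python
# A minimal sketch of role-to-literacy mapping, assuming three
# illustrative tiers. Role names and tier labels are hypothetical,
# not drawn from the DOL framework.

LITERACY_LEVELS = ["baseline", "applied", "deep"]

# Hypothetical mapping of roles to the AI literacy tier they require.
ROLE_LITERACY_MAP = {
    "warehouse_operative": "baseline",
    "customer_support_agent": "applied",
    "marketing_manager": "applied",
    "senior_analyst": "deep",
}


def training_tier(role: str) -> str:
    """Return the literacy tier a role requires, defaulting to baseline."""
    level = ROLE_LITERACY_MAP.get(role, "baseline")
    assert level in LITERACY_LEVELS
    return level


def roles_at_level(level: str) -> list[str]:
    """List roles mapped to a given tier, e.g. for planning training cohorts."""
    return sorted(role for role, lvl in ROLE_LITERACY_MAP.items() if lvl == level)
```

Defaulting unmapped roles to baseline, rather than excluding them, mirrors the framework's position that every worker needs at least foundational literacy.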

The framework also asks employers to provide clear internal guidance on appropriate AI use, to encourage hands-on practice built around real workplace tasks, and to identify roles that may require deeper proficiency beyond baseline literacy. Bayer's Data Academy illustrates what this looks like at scale: the company built a multi-tier program to strengthen foundational digital and AI fluency across the enterprise, with over 90% of learners reporting improved innovation or processes after completing training. Each of the framework's asks maps directly to program design decisions that most organisations have not yet made explicitly.

The Practical Ask

The framework is not asking employers to build something from scratch. It is asking them to be deliberate about something most are already doing informally: deciding which tools, which roles, and which standards. The difference between informal and deliberate is measurability — and measurability is what separates programs that improve from programs that stagnate.

Section 04

The Seven Delivery Principles — and Why They Challenge Most Current Programs

The most practically useful part of the framework for employers is not the content areas. It is the seven delivery principles, which describe how AI literacy training should be structured to actually produce capability rather than completion rates. Read against the most common AI training format in 2026 — a combination of self-paced online modules and occasional instructor-led sessions — they constitute a direct challenge to what most organisations are currently doing.

1. Enable experiential learning
The framework is unambiguous that AI literacy is most effectively developed through direct, hands-on use. Workers build confidence and understanding by using AI in real-world contexts to solve actual tasks. Passive content consumption is specifically not sufficient. If your training program is primarily video and reading, it is not meeting this principle.

2. Embed learning in context
Training should be anchored to the specific workflows, tools, and industry terminology of each role. Generic content delivered to a mixed audience produces generic results. A compliance officer and a marketing manager need different examples, different tools, and different definitions of what good looks like.

3. Build complementary human skills
The framework frames AI as an amplifier of human input, whose value depends on the skills and judgment of the people who design, manage, and interact with it. Training should reinforce critical thinking, communication, and domain expertise alongside tool use rather than treating them as separate.

4. Address prerequisites to AI literacy
Not all employees start from the same baseline. Digital literacy, language access, and basic technology familiarity are prerequisites that must be addressed before AI-specific training can be effective. Organisations that skip this step produce AI training graduates who are still lost in practice.

5. Create pathways for continued learning
Foundational literacy is a starting point, not a destination. Programs need visible routes to deeper, role-specific proficiency. A single module with a completion certificate at the end is not a pathway; it is a ceiling. The framework expects employees to be able to progress beyond it.

6. Prepare enabling roles
Managers, HR leads, and L&D teams need their own targeted AI literacy, shaped around their specific function of supporting others through adoption. This is consistently the most underinvested layer in most programs, and its absence limits the effectiveness of everything else.

7. Design for agility
AI technologies evolve at a pace unlike previous workplace tools, with new capabilities and platforms emerging every few months while older tools become obsolete just as quickly. AI literacy is not a fixed curriculum. A training program that cannot be updated is a training program that is already falling behind.

The pattern is consistent across enterprise research: passive training formats consistently struggle to build applied AI capability. The framework is not describing what most organisations are doing. It is describing what they should be doing instead. The gap between the two is where most AI skills gaps live.

Section 05

How This Compares to the EU AI Act's Training Obligations

Employers operating across both US and EU jurisdictions are now navigating two distinct frameworks, with a critical structural difference between them.

United States — DOL AI Literacy Framework
Voluntary. No enforcement mechanism. No penalties for non-compliance. Published February 2026 through Training and Employment Notice 07-25. Sets a federal-level content and delivery standard that hiring and procurement conversations will increasingly reference.

European Union — EU AI Act Article 4
Mandatory. Requires deployers of AI systems to ensure staff have a sufficient level of AI literacy. Enforcement for high-risk AI deployers begins August 2026. Non-compliance can compound penalties for other violations. Applies to any organisation with EU employees, customers, or operations.

The practical implication for multinational employers is that the EU obligation sets the legal floor, while the DOL framework provides a useful content structure for meeting it across a US workforce. The five content areas the DOL defines are substantively consistent with what Article 4 requires in terms of contextual, role-appropriate literacy.

Organisations that build training programs aligned to the DOL framework are not automatically meeting the EU AI Act obligation — but they are much closer to it than organisations running generic awareness sessions. For companies with employees in both jurisdictions, the DOL framework is a practical starting architecture for a single program that goes a long way toward satisfying both. The full picture of EU deployer obligations is in the EU AI Act deployer obligations guide, and the Article 4 requirements in detail in what the EU AI Act means for your team's training.

Section 06

What "Voluntary" Actually Means for Employer Risk

The voluntary nature of the framework does not mean it is consequence-free to ignore. Three practical risks apply to employers who treat it as irrelevant.

Workforce expectations. 55% of employees say access to AI training or certification would make them more likely to stay with an employer, and when employers provide AI training, AI adoption jumps to 76% compared to just 25% without support. As the DOL framework filters into hiring conversations and workforce development programs, employees will increasingly know what AI literacy means, and will notice when their employer's training falls short of it.

Procurement and contracting. Organisations bidding for federal contracts or partnering with public institutions will find AI literacy expectations embedded in those conversations at an increasing rate. The framework provides a shared vocabulary that these conversations will use.

Regulatory trajectory. Voluntary frameworks consistently precede mandatory ones. The current administration has already proposed pre-empting state-level AI laws in favour of a national standard, a direction that makes the DOL framework's definitions more likely to be codified over time, not less. Treating it as an early signal of where US policy is heading is more accurate than treating it as a permanent opt-out.

The Timing Argument

None of these risks are immediate. All of them compound over time for organisations that wait. The employers in the strongest position in 2027 and 2028 are the ones building deliberately against this framework in 2026. Not the ones scrambling to catch up once the conversations require it.

Section 07

Three Things to Do With the Framework Now

Voluntary guidance is most useful when it produces specific decisions, not general awareness. Here are the three most actionable things employers can take from the DOL framework.

1. Map your roles to literacy levels
The framework is explicit that different roles require different levels of AI literacy. 59% of enterprise leaders report an AI skills gap in 2026 even though most are already investing in some form of training, and only 35% say they have a mature, organisation-wide upskilling program. Most organisations have not done the role-mapping that would let them close that gap deliberately. Doing it now — identifying which roles need baseline literacy, which need applied proficiency, and which need deeper capability — is the foundation for everything else. What employees need by role covers that mapping in detail.

2. Audit your current training against the seven delivery principles
Take each of the seven delivery principles and assess honestly whether your current program meets it. Experiential learning. Role context. Pathways for progression. Agility. Organisations with mature AI upskilling programs are nearly twice as likely to report significant positive AI ROI as those without one. That audit tells you where to focus investment. It is a more honest diagnostic than a completion rate.

3. Build a review cadence into your program
The DOL has explicitly committed to updating the framework to reflect evolving digital and AI competency requirements. Any training program built against it needs the same mechanism. Not a fixed curriculum that falls out of date as the tools it covers are replaced. A living program with a named owner, a review schedule, and a process for incorporating what changes. The same agility principle the framework asks employers to build into training programs applies to the programs themselves.
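The audit in the second recommendation can be sketched as a small script: record a yes/no self-assessment against each of the seven delivery principles and list the gaps. The principle keys and the example self-assessment below are illustrative assumptions, not an official scoring rubric from the framework.

```python
# A minimal sketch of auditing a training program against the seven
# delivery principles. The principle keys and the example program
# self-assessment are illustrative assumptions, not an official rubric.

DELIVERY_PRINCIPLES = [
    "experiential_learning",
    "learning_in_context",
    "complementary_human_skills",
    "prerequisites_addressed",
    "pathways_for_progression",
    "enabling_roles_prepared",
    "designed_for_agility",
]


def audit_gaps(self_assessment: dict[str, bool]) -> list[str]:
    """Return the principles a program does not yet meet.

    A principle missing from the self-assessment counts as unmet,
    which keeps the audit honest by default.
    """
    return [p for p in DELIVERY_PRINCIPLES if not self_assessment.get(p, False)]


# Hypothetical example: a typical modules-plus-webinars program.
current_program = {
    "experiential_learning": False,   # mostly video and reading
    "learning_in_context": False,     # one generic curriculum for all roles
    "complementary_human_skills": True,
    "prerequisites_addressed": False,
    "pathways_for_progression": False,
    "enabling_roles_prepared": False,
    "designed_for_agility": True,
}
```

The point of the sketch is the default: anything not explicitly assessed as met shows up as a gap, which is the same honesty the framework asks for.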
Aligning Your AI Training Program to the DOL Framework

- We have mapped AI literacy requirements by role, not applied a single standard to all employees regardless of function or seniority.
- Our training includes hands-on practice with real tools on actual work tasks — not passive content consumption or abstract scenarios.
- Training content is contextualised to specific roles, workflows, and industry context rather than delivered as generic AI literacy to all employees at once.
- Managers and L&D leads have received AI literacy training suited to their enabling role, not the same content delivered to individual contributors.
- We have a structured pathway from foundational literacy to role-specific proficiency, so employees know where to go after completing an introductory module.
- Our training program has a defined review cadence and a named owner responsible for updating content as tools and requirements evolve.
- We have internal guidance on appropriate AI use that employees are trained to follow, not just published on an intranet.
- We have assessed and addressed any digital literacy prerequisites in our workforce before deploying AI-specific training.
Frequently Asked Questions
DOL AI Literacy Framework — Common Questions
Answers to the questions HR leads, L&D teams, and compliance functions most commonly ask about the framework and what it means in practice.
What is the US Department of Labor AI Literacy Framework?
The framework was published on February 13, 2026 through Training and Employment Notice 07-25. It is voluntary guidance that establishes the first federal-level definition of AI literacy for workers, organised into five foundational content areas: understanding AI principles, exploring potential uses, directing AI effectively, evaluating outputs, and using AI responsibly. It also specifies seven delivery principles that describe how effective AI literacy training should be structured. It is not a mandate and carries no enforcement mechanism. It represents the first shared national benchmark for what AI-ready looks like in the US workforce.
Is the DOL AI Literacy Framework mandatory for employers?
No. The framework is voluntary guidance with no penalties for non-compliance. However, voluntary frameworks consistently precede mandatory ones, and the framework's definitions are likely to appear in federal contracting conversations, workforce development programs, and employee expectations as it becomes mainstream. Organisations that align to it now are building against the benchmark that these conversations will increasingly reference — which is a better position than scrambling to align once it matters.
What are the seven delivery principles in the DOL AI Literacy Framework?
The seven principles are: enable experiential learning through hands-on use; embed learning in the specific context of each role; build complementary human skills alongside tool use; address digital literacy prerequisites; create pathways for continued learning beyond foundational modules; prepare enabling roles including managers and L&D leads; and design for agility so training can be updated as tools evolve. Together these principles describe what effective AI literacy training looks like in practice. They directly challenge the most common current training format of online modules and occasional instructor-led sessions.
How does the DOL framework compare to the EU AI Act's training obligations?
The EU AI Act's Article 4 literacy obligation is mandatory, requiring deployers of AI systems to ensure staff have sufficient AI literacy, with enforcement for high-risk AI deployers beginning August 2026. The DOL framework carries no enforcement mechanism. For multinational employers, the EU obligation sets the legal floor while the DOL framework provides a practical content structure. Building against the DOL framework puts organisations substantially closer to meeting Article 4 than running generic awareness sessions. See what the EU AI Act means for your team's training for the full Article 4 picture.
What should employers do with the DOL AI Literacy Framework?
Three things. First, map your roles to literacy levels — the framework is explicit that different roles require different levels, and most organisations have not done this mapping. Second, audit your current training against the seven delivery principles to identify where investment is needed. Third, build a review cadence into your program, since the DOL has committed to updating the framework as AI evolves. Voluntary guidance is most useful when it produces specific decisions, not general awareness — and all three of these produce decisions.
The DOL framework defines what AI-ready employees look like.

Savia's learning paths are built around the same principles: role-specific, applied, and designed for progression beyond awareness. If your current program does not meet the framework's delivery standards, the AI Literacy and GRC learning paths are where to start.