HR teams occupy an unusual position in the AI landscape. They're simultaneously some of the heaviest users of AI in their daily work and the function most directly exposed to the legal consequences when AI gets something wrong. Recruitment screening, performance evaluation, workforce planning, and employee monitoring all sit within HR's remit. All of them are now explicitly regulated in ways that most HR professionals haven't been trained to navigate.

The awareness gap here is striking. 57% of HR professionals who work in states with workforce-related AI laws say they're not aware of those policies. Of those who are aware, only 12% have implemented practices to be compliant. Those aren't comfortable numbers for a function that carries direct legal accountability for employment decisions.

To be clear on the foundational point: employers remain responsible for employment decisions even when AI tools contribute to them. Technology doesn't eliminate employer accountability. The five training areas in this article are built around that reality. This article builds on the broader role framework in AI Literacy by Role.

Section 01

Why HR Faces a Different Training Problem

The training challenge for HR is structurally different from every other knowledge worker function. For finance analysts or marketing teams, AI training is primarily about managing output quality: catching errors before they reach clients or decisions. For HR, the problem is larger than that.

The AI outputs aren't just occasionally wrong. They're systematically wrong in ways that track protected characteristics, and the consequences of those errors aren't quality failures. They're discrimination claims.

A 2025 University of Washington study found that recruiters who reviewed applicants using AI tools with bias built into the models mirrored the inequitable choices of the AI up to 90% of the time. When those same recruiters made decisions without AI, or with unbiased AI, they chose candidates equally regardless of demographic group. That's the training problem in a single data point. The issue isn't that HR professionals are independently biased. It's that AI makes their decision-making worse in specific, systematic, and legally actionable ways. Training that doesn't address this mechanism can't address the risk.

57% of HR professionals in states with workforce AI laws say they're not aware of those policies. (SHRM — State of AI in HR 2026)
12% of HR professionals who are aware of applicable AI laws have actually implemented compliant practices. (SHRM — State of AI in HR 2026)
90% of the time, recruiters using biased AI tools mirrored the AI's inequitable candidate choices. (University of Washington via HR Brew, 2025)
Section 02

The Regulatory Landscape HR Must Understand

No other workplace function is subject to the volume and specificity of AI regulation that now governs HR decisions. And this isn't something HR professionals can treat as a compliance team problem. They need to understand it operationally: which tools trigger which obligations, and what they personally need to do as a result.

Illinois HB 3773 — effective January 2026
Applies existing anti-discrimination standards to AI used in employment decisions. Employers are responsible for ensuring AI tools used in hiring, promotion, discipline, and termination don't produce unlawful discriminatory outcomes.
Colorado AI Act — risk management by June 2026
Requires risk management programmes for high-risk AI systems. Employment AI sits squarely within the high-risk classification.
California FEHA amendments
Clarify how existing civil rights protections apply when automated tools are used in hiring and employment evaluations. Disclosure obligations to applicants and employees are now explicit.
Texas — notice obligations
Joins California, Illinois, and Colorado in requiring notice to applicants and employees when AI is used in employment decisions. Multi-state employers now face compounding obligations.
EU AI Act — high-risk classification
Classifies all employment AI as high-risk, requiring mandatory conformity assessments and documented human oversight. See the high-risk requirements guide for what this means operationally.

The practical implication for training: HR professionals need to know which tools they're currently using fall within regulated categories, what documentation those tools require, and what human oversight means in practice for each tool type. Not as a checklist. As a defensible operational standard.

⚠ The Compliance Gap

Despite this regulatory activity, 57% of HR professionals in states with workforce AI laws don't know those laws exist. Of those who do, only 12% have implemented compliant practices. For organisations operating under the EU AI Act, the gap between policy on paper and governance that actually works is where deployer obligations become personal accountability.

Section 03

Training Area One — Recruitment and Hiring AI

Estimated training time: 90 minutes

44% of HR teams now use AI for applicant screening, and 32% use these tools to automate candidate searches. 93% of recruiters plan to increase their AI usage this year to meet intensifying hiring goals. The tools driving this include Eightfold AI, HireVue, Workday Recruiting, and iCIMS — all of which incorporate AI scoring and ranking at various points in the screening process.

The legal exposure has arrived. In January 2026, job applicants filed a proposed class action against Eightfold AI, alleging the company compiled hidden candidate dossiers and scoring reports without disclosing its process to applicants. That case is a useful training reference because it illustrates exactly the transparency and documentation failures that HR professionals need to identify before they become a liability.

The mechanism of bias in ATS and screening tools is specific and teachable. It's not that the algorithm is malicious. It's that AI trained on historical hiring data learns which profiles your organisation has historically hired and weights future candidates accordingly. Algorithms perpetuate historical biases embedded in their training data and amplify past discriminatory practices at scale. And here's the part that catches most HR professionals off guard: if your vendor's AI is biased, you face the legal consequences. Vendor selection is not a procurement decision you can outsource the accountability for.
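
To make that mechanism concrete, here is a minimal sketch of the kind of shortlist audit the training scenario below builds on. The applicant pool, group labels, and shortlists are hypothetical; the four-fifths comparison is the EEOC's long-standing screening heuristic for adverse impact, not a legal conclusion in itself.

```python
from collections import Counter

# Hypothetical applicant pool: candidate id -> demographic group.
# In practice, group data comes from voluntary self-identification and
# is analysed in aggregate -- never fed to the screening tool itself.
applicants = {f"c{i}": ("B" if i % 3 == 0 else "A") for i in range(1, 31)}

# Hypothetical shortlists for the same role.
ai_shortlist = ["c1", "c2", "c4", "c5", "c7", "c8", "c10", "c11"]
blind_shortlist = ["c1", "c3", "c4", "c6", "c7", "c9", "c10", "c11"]

def selection_rates(pool, shortlist):
    """Selection rate per group: shortlisted / total applicants in group."""
    totals = Counter(pool.values())
    picks = Counter(pool[c] for c in shortlist)
    return {g: picks.get(g, 0) / n for g, n in totals.items()}

def four_fifths_flags(rates):
    """Groups selected at under 80% of the best-off group's rate --
    the EEOC's four-fifths screening heuristic for adverse impact."""
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items() if r < 0.8 * top}

for label, shortlist in [("AI-ranked", ai_shortlist), ("blind human", blind_shortlist)]:
    rates = selection_rates(applicants, shortlist)
    print(f"{label}: rates={rates} flags={four_fifths_flags(rates)}")
```

A divergence like the one this audit surfaces between the AI-ranked and blind shortlists is exactly the kind of bias indicator the scenario asks professionals to investigate before proceeding.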

Training scenario

An HR professional receives two shortlists for the same role: one ranked by an AI screening tool, one ranked by a human recruiter reviewing CVs blind. The professional must identify three specific differences between the shortlists, determine which could represent systematic bias rather than genuine qualification differences, and describe the investigation they'd conduct before proceeding with interviews.

Learning objective: Identify at least two potential bias indicators in an AI-generated shortlist, explain the mechanism by which each could arise from training data, and describe a specific investigation step for each — demonstrated through written analysis, not multiple-choice assessment.
Section 04

Training Area Two — Performance Management AI

Estimated training time: 75 minutes

Performance management AI carries a set of risks distinct from those of recruitment AI. The harms are less visible because they accumulate over time rather than presenting at a single decision point, but they're equally actionable.

Think about what this actually looks like day-to-day. AI now appears throughout the working day, quietly scoring outputs, flagging attendance, tracking keystrokes, scanning emails for risk, and interpreting video from workplace cameras. Tools like Viva Insights and Aware are generating performance signals that feed directly into review conversations, often without the employees being assessed, or even the managers conducting the reviews, knowing exactly what the inputs are.

The specific training need is helping HR professionals distinguish between three things: performance data that AI has generated, performance data that AI has summarised from human-generated records, and performance judgements that require human assessment regardless of what AI surfaces. Those are different situations with different oversight requirements, and conflating them is where the legal exposure lives. When AI metrics proxy for protected characteristics — output volume metrics that disadvantage employees on caregiving leave, for example — the discrimination is structural even when no individual acted with discriminatory intent.
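
One way to make that three-way distinction operational is to tag every performance signal with its provenance before it enters a review. The sketch below is illustrative only; the class names and review rules are hypothetical rather than drawn from any specific tool.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Provenance(Enum):
    """Where a performance signal came from. Each class carries a
    different oversight requirement before it can influence a decision."""
    AI_GENERATED = auto()     # metric the AI itself produced (e.g. output-volume score)
    AI_SUMMARISED = auto()    # AI summary of human-generated records (e.g. review notes)
    HUMAN_JUDGEMENT = auto()  # assessment a human must make; AI may only surface inputs

# Hypothetical oversight rules keyed by provenance class.
REQUIRED_REVIEW = {
    Provenance.AI_GENERATED:
        "Validate the metric and check for protected-characteristic proxies before use.",
    Provenance.AI_SUMMARISED:
        "Verify the summary against the underlying human-written records.",
    Provenance.HUMAN_JUDGEMENT:
        "A named person makes and documents the judgement; AI output is advisory only.",
}

@dataclass
class PerformanceSignal:
    description: str
    provenance: Provenance

    def required_review(self) -> str:
        return REQUIRED_REVIEW[self.provenance]

# An output-volume flag must clear the proxy check before any review uses it.
flag = PerformanceSignal("Output volume below team median over six months",
                         Provenance.AI_GENERATED)
print(flag.required_review())
```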

Training scenario

An HR professional receives an AI-generated performance summary for a team member flagged as underperforming, based on output volume metrics over six months. They must identify what the metrics don't capture that a manager's direct observation would, what protected characteristics could be proxied by the metrics being used, and what the appropriate human review step is before this summary influences a promotion, pay, or disciplinary decision.

Learning objective: Apply a structured human review framework to an AI-generated performance assessment, identifying at least two gaps between metric-based AI analysis and defensible performance judgement — demonstrated through a written review plan.
Section 05

Training Area Three — Employee Monitoring AI

Estimated training time: 75 minutes

Employee monitoring has expanded dramatically with AI, and HR professionals are increasingly responsible for governance decisions about monitoring tools they didn't select and may not fully understand. Dashcams, wearables, productivity analytics platforms, email scanners, and video analytics tools are now in common use across logistics, retail, financial services, and professional services environments.

The legal exposure is a patchwork, and it's getting denser. CCPA, BIPA, HIPAA, and a growing body of automated decision tool laws now dictate how employee data can be collected, used, and stored. Employers face rising exposure across biometrics, AI tools, and digital tracking systems. The problem for HR is that most of these tools were procured by IT or operations, and HR is left holding the governance responsibility without having been involved in the procurement decision.

That's the training gap this area addresses: not how to operate these tools, but whether the organisation's use of them is legally defensible, and what to do if it's not.

Training scenario

Three monitoring tools are currently deployed in the organisation: a productivity analytics platform, an AI email scanner, and a video analytics tool used in warehouse operations. For each, the professional must identify what data the tool collects, whether employees have been notified, what the legal basis for collection is in their jurisdiction, and what the appeal mechanism is if an employee disputes a finding.

Learning objective: Conduct a compliance assessment of three AI monitoring tools covering data collection, notification, legal basis, and appeal pathway — demonstrated through a written tool-by-tool analysis.
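
As an illustration of what that written deliverable might look like in structured form, here is a minimal sketch of a single assessment record. The field names are hypothetical; the four dimensions are the ones the scenario specifies.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class MonitoringToolAssessment:
    """One row of the tool-by-tool compliance assessment: the four
    dimensions from the training scenario above."""
    tool: str
    data_collected: Optional[str]    # what the tool captures
    employee_notice: Optional[str]   # how and when employees were told
    legal_basis: Optional[str]       # jurisdiction-specific basis for collection
    appeal_mechanism: Optional[str]  # how an employee disputes a finding

    def gaps(self) -> list[str]:
        """Dimensions with nothing documented; each one is open exposure."""
        return [f.name for f in fields(self)
                if f.name != "tool" and getattr(self, f.name) is None]

# Hypothetical assessment of the warehouse video analytics tool.
video = MonitoringToolAssessment(
    tool="Warehouse video analytics",
    data_collected="Movement patterns and dwell time near loading bays",
    employee_notice=None,  # no notice given: a gap to close before anything else
    legal_basis="Documented legitimate-interests assessment on file",
    appeal_mechanism=None,
)
print(video.gaps())  # ['employee_notice', 'appeal_mechanism']
```
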
Section 06

Training Area Four — AI Vendor Due Diligence

Estimated training time: 75 minutes

HR professionals are frequently evaluating, procuring, or recommending AI tools, and are rarely trained to ask the questions that would reveal a tool's governance quality before deployment. This matters because employer liability for vendor algorithm outputs is not a theoretical risk. It's a documented one, with active litigation in multiple US jurisdictions.

The practical gap is straightforward. Most HR professionals don't know what questions to ask during a vendor evaluation that would distinguish a well-governed AI tool from a governance-thin one. A bias testing statement in a vendor deck is not assurance. What was the testing methodology? Which demographic categories were included? How frequently is the model retrained, and with what data? What happens when the employer's own historical data is used for fine-tuning? These are the questions that matter, and they're almost never asked.
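
Those questions can be turned into a reusable checklist applied to every proposal. A minimal sketch follows; the question keys and risk descriptions are illustrative, not a definitive taxonomy.

```python
# Hypothetical due-diligence checklist: each governance question paired
# with the risk that an unanswered question leaves with the employer.
GOVERNANCE_QUESTIONS = {
    "bias_testing_methodology": "Bias claims can't be verified or defended later.",
    "demographic_categories_tested": "Adverse impact on untested groups goes undetected.",
    "retraining_frequency_and_data": "Model drift can reintroduce bias between audits.",
    "employer_data_fine_tuning": "Your own historical hiring skew gets baked into the model.",
    "audit_rights": "No way to independently inspect the tool after deployment.",
    "vendor_liability_position": "Discrimination liability lands solely on the employer.",
}

def governance_gaps(proposal_sections: set[str]) -> dict[str, str]:
    """Questions the vendor proposal leaves unanswered, with the risk
    each gap creates for the procuring organisation."""
    return {q: risk for q, risk in GOVERNANCE_QUESTIONS.items()
            if q not in proposal_sections}

# Hypothetical proposal covering only capabilities, a bias statement,
# and sample output -- the exact shape of the training scenario below.
proposal = {"capabilities_summary", "bias_testing_methodology", "sample_output"}
for question, risk in governance_gaps(proposal).items():
    print(f"MISSING {question}: {risk}")
```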

The Eightfold AI class action from January 2026 is a useful anchor here. The plaintiffs allege that the governance failures were not in the algorithm's core function but in how data was collected, stored, and used without disclosure. That's a procurement and due diligence failure, not just a technology one. It's the kind of failure that HR training can directly prevent.

Training scenario

An HR professional reviews a vendor's AI screening tool proposal: capabilities summary, bias testing statement, and sample output. They must identify five governance questions absent from the proposal — how was bias testing conducted and against which demographic categories; how frequently is the model retrained; what happens when the employer's historical data is used for fine-tuning; what audit rights does the employer retain; what is the vendor's liability position if the tool produces a discriminatory outcome.

Learning objective: Generate at least four specific governance questions absent from a vendor AI proposal, with articulation of the specific risk each gap creates for the procuring organisation.
Section 07

Training Area Five — Communicating AI Use Transparently to Employees

Estimated training time: 60 minutes

Transparency obligations are one of the most rapidly expanding areas of HR AI regulation. California, Illinois, Colorado, and Texas all now require notice to applicants and employees when AI is used in employment decisions. This isn't a future requirement. It's a current one, and most HR teams aren't yet meeting it.

Beyond legal obligation, there's a practical trust concern. 87% of HR professionals acknowledge that employee preferences for human interaction would prevent full automation of HR, even if technical barriers were removed. Employees who don't understand how AI is being used in decisions that affect them aren't just a compliance risk. They're a culture risk. And the solution isn't longer legal notices. It's clearer communication, which is a skill that needs to be trained.

HR professionals need to learn how to communicate AI use in language that actually enables employees to understand what's being done with their data and how to challenge an AI-influenced decision. That's different from drafting a privacy notice, and it requires different training.

Training scenario

Two versions of an AI disclosure statement for a recruitment process are presented. The first uses generic legal language. The second is plain-language and specific. The professional identifies what the plain-language version includes that the legal version doesn't: what the AI does, what data it uses, what human review step exists, and how a candidate can request reconsideration. They then draft a third version for a performance management AI tool.

Learning objective: Draft a compliant, plain-language AI disclosure for an HR tool covering tool function, data use, human review step, and appeal pathway — assessed against a rubric covering completeness, plain language, and jurisdiction-specific obligations.
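
To show what the rubric's completeness dimension could look like mechanically, here is a minimal sketch. The four required elements come from the learning objective above; the keyword lists are hypothetical stand-ins for a human reviewer.

```python
# Hypothetical completeness check for a plain-language AI disclosure.
# The four required elements come from the learning objective above;
# the keyword lists are crude stand-ins for a human rubric review.
REQUIRED_ELEMENTS = {
    "tool_function": ["scores", "ranks", "summarises", "what the tool does"],
    "data_use": ["data", "information about you"],
    "human_review": ["a person reviews", "human review"],
    "appeal_pathway": ["reconsideration", "challenge", "appeal"],
}

def completeness_gaps(disclosure: str) -> list[str]:
    """Required elements with no matching phrase in the draft."""
    text = disclosure.lower()
    return [element for element, phrases in REQUIRED_ELEMENTS.items()
            if not any(phrase in text for phrase in phrases)]

draft = (
    "We use software that scores your application against the role's "
    "requirements, using only the data in your CV and written answers. "
    "A person reviews every shortlist before any decision is made."
)
print(completeness_gaps(draft))  # ['appeal_pathway']
```

A keyword check like this can't judge plain language, but it catches the most common failure: a disclosure that never tells the candidate how to contest the outcome.
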
Section 08

Programme Design and Time Investment

The five training areas are sequenced deliberately. Regulatory awareness and the bias mechanism — covered in areas one and two — are prerequisites for the more applied areas that follow. An HR professional who can't describe how ATS bias works can't conduct a meaningful vendor due diligence assessment. The sequence matters.

Session | Training area | Format | Time
One | Recruitment AI and bias | Shortlist analysis exercise | 90 minutes
Two | Performance management AI | Review plan exercise | 75 minutes
Three | Employee monitoring | Compliance assessment exercise | 75 minutes
Four | Vendor due diligence | Proposal critique exercise | 75 minutes
Five | Transparency communications | Disclosure drafting exercise | 60 minutes

Approximately six and a quarter hours (375 minutes) across five sessions, ideally delivered over four to six weeks rather than as a block. Each session produces a written deliverable — a shortlist analysis, a review plan, a compliance assessment, a governance gap report, or a disclosure draft — that the professional can use directly in their work. These aren't throwaway in-session exercises. They're documents that hold up when something goes wrong and someone asks what oversight was applied.

HR AI Training — Design Checklist
Regulatory awareness is covered first — which tools trigger which obligations in which jurisdictions, not generic compliance awareness.
Bias mechanism training is specific and teachable — how training data produces skewed outputs, not just that bias exists.
Performance management AI is treated separately from recruitment AI — different tools, different failure modes, different oversight requirements.
Vendor due diligence training generates specific questions — not a checklist of what good looks like, but practice asking the questions that reveal governance quality.
Each session produces a written deliverable that the professional can use directly in their governance role.
Transparency training addresses communication, not just policy — drafting plain-language disclosures that employees can actually act on.
Frequently Asked Questions
HR AI Training — Common Questions
Answers to the questions HR directors, L&D leads, and compliance teams most commonly ask when designing AI training for HR functions.
What AI training do HR teams actually need in 2026?
Training across five specific areas: recruitment and hiring AI (bias identification in screening tools, shortlist auditing); performance management AI (distinguishing AI-generated data from AI-summarised data from genuine human judgement); employee monitoring AI (compliance assessment of tools HR didn't select but now owns governance for); AI vendor due diligence (governance questions to ask before procurement); and transparent employee communications about AI use. The full role framework is in AI Literacy by Role.
Why is HR's AI training problem different from other functions?
For most knowledge worker functions, AI training is primarily about managing output quality. For HR, the problem is larger. The AI outputs aren't just occasionally wrong — they're systematically wrong in ways that track protected characteristics, and the consequences are discrimination claims, not quality failures. A 2025 University of Washington study found that recruiters using biased AI tools mirrored the AI's inequitable choices up to 90% of the time. The issue isn't that HR professionals are independently biased. It's that AI makes their decision-making worse in specific, systematic, and legally actionable ways.
What AI regulations apply to HR teams in 2026?
The US state-level picture is moving fast. Illinois HB 3773 (effective January 2026) applies anti-discrimination standards to AI in employment decisions. Colorado requires risk management programmes for high-risk AI systems by June 2026. California, Illinois, Colorado, and Texas all require notice to applicants and employees when AI is used in employment decisions. Internationally, the EU AI Act classifies all employment AI as high-risk. Despite this, 57% of HR professionals in states with workforce AI laws don't know those laws exist.
Who is legally responsible when an AI screening tool discriminates?
The employer is. Employers remain legally responsible for employment decisions even when AI tools contribute to them. This applies to vendor algorithm outputs too: if your background check provider's AI is biased, you face the legal consequences. The Eightfold AI class action filed in January 2026 and the Mobley v. Workday case — where a federal court granted preliminary class certification for AI screening discrimination — have both established this clearly. Vendor selection is not a procurement decision you can outsource the accountability for.
How long does HR AI training take?
Five sessions totalling 375 minutes, or roughly six and a quarter hours: recruitment AI and bias (90 minutes), performance management AI (75 minutes), employee monitoring (75 minutes), vendor due diligence (75 minutes), and transparency communications (60 minutes). Each session produces a written deliverable — a shortlist analysis, review plan, compliance assessment, governance gap report, or disclosure draft — that the professional can use directly in their work. Ideally delivered over four to six weeks, not as a single block.
HR teams carry more AI accountability than almost any other function — and are among the least trained for it.

Savia's role-specific AI learning paths include HR-specific content built around the actual tools, regulatory obligations, and governance decisions HR professionals face. Practical, scenario-based, and designed to produce the kind of defensible judgement that HR decisions now require.