Most organisations using AI tools today are doing so without a formal policy governing how those tools are used. Employees are making individual judgments about what to put into AI systems, which tools are appropriate for which tasks, and what to do when an output looks wrong. Some of those judgments are sound. Many are not.

The scale of the problem is not theoretical. Only 10% of companies have a comprehensive, formal AI policy in place, and more than one in four say no policy exists in their workplace and none is planned. At the same time, employees are already acting: according to a 2024 survey of over 7,000 workers, 38% of employed AI users have submitted sensitive work information to AI tools without their employer's knowledge.

An AI acceptable use policy changes that. It sets out, clearly and in plain language, which AI tools employees are authorised to use and which they are not, what kind of information should never enter an AI system, and what the organisation expects from anyone using AI in their work. This guide walks through how to write one. A policy that employees cannot follow because it is too vague, or will not follow because it was never communicated, is not a policy. It is a document.

Section 01

Decide What the Policy Is Actually Trying to Govern

Before any drafting begins, leadership needs to be clear about what problem the policy is solving. Organisations that skip this step produce policies that are either too broad to be actionable or too narrow to cover the risks that actually matter.

There are three distinct things an AI acceptable use policy can govern, and most organisations need to address all three.

Tool authorisation — which AI tools employees are permitted to use, and in what contexts. This matters because employees are often already using tools the organisation has not approved, including free consumer versions that handle data very differently from enterprise equivalents.

Data handling — what information can and cannot be entered into an AI system. Customer data, personal data, commercially sensitive information, legally privileged content: these categories each carry different risk profiles and need different rules.

Output responsibility — who is accountable for AI-assisted work, how outputs should be reviewed before use, and when AI-generated content requires disclosure.

Most organisations have gaps across all three. Knowing where yours are largest shapes everything that follows — the scope of the audit, the specificity of the tool tiers, the emphasis of the data rules. Before any drafting, you need to understand the real landscape. Which is why the next step is not writing. It is observing. For the broader governance layer that sits around the policy, AI content accountability covers how to manage risk, quality, and ownership in AI-assisted workflows.

Section 02

Audit What Is Already Happening

A policy written in isolation from how employees are already using AI will miss the most important risks and create rules that bear no relationship to actual behaviour. Before drafting, find out what tools employees are currently using, in which teams, for which tasks. A short survey or a series of team conversations will reveal the landscape quickly. What you are looking for is the gap between what you think is happening and what is actually happening.

In most organisations, that gap is significant — and it runs in a predictable direction. Employees are not typically entering the most obviously sensitive data into AI tools. They are entering the materials they use every day: meeting notes, customer queries, support tickets, draft proposals. The risk is not malicious intent. It is routine work colliding with the wrong tool.

27.4% of corporate data entered into AI tools in 2024 was sensitive, up from 10.7% the year before. The most common categories: customer support records and source code. (Cyberhaven — AI Data Exposure Report, 2024)

71.6% of generative AI access in enterprise environments happens via non-corporate accounts, outside any organisational visibility, control, or data processing agreement. (LayerX Security — Enterprise GenAI Usage Report, 2025)
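
Producing your own version of these numbers does not require specialist tooling. The sketch below is a minimal example, assuming a hypothetical CSV export of audit survey responses with team, tool, approved, and data_types columns; real survey exports and column names will differ.

```python
# Minimal sketch: turn an AI-usage survey export into an audit baseline.
# Assumes a hypothetical CSV with columns: team, tool, approved (yes/no),
# and data_types (semicolon-separated, e.g. "customer_pii;source_code").
import csv
from collections import Counter

SENSITIVE = {"customer_pii", "source_code", "financials", "credentials"}

def summarise_usage(path: str) -> None:
    tool_usage = Counter()      # (team, tool) -> number of responses
    sensitive_unapproved = 0    # sensitive data reported in non-approved tools
    total = 0

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            tool_usage[(row["team"], row["tool"])] += 1
            if row["approved"].strip().lower() != "yes" and (
                SENSITIVE & set(row["data_types"].split(";"))
            ):
                sensitive_unapproved += 1

    print("Most common team/tool pairs:", tool_usage.most_common(5))
    if total:
        pct = 100 * sensitive_unapproved / total
        print(f"Responses reporting sensitive data in non-approved tools: {pct:.1f}%")

summarise_usage("ai_usage_survey.csv")
```

The exact fields matter less than producing a number you can compare against once the policy has shipped.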

The audit has two functions: it gives you accurate inputs for policy design, and it gives you a baseline against which you can measure whether the policy, once published, actually changes behaviour. A policy you cannot measure is a policy you cannot improve. Once you know what is actually happening, you can start turning that picture into a coherent set of rules — beginning with how tools are categorised.

Section 03

Define Your Tool Categories

One of the most practically useful things an AI acceptable use policy can do is give employees a clear, simple framework for understanding which tools are approved for which purposes. The most workable approach is a tiered structure with three levels.

Tier 1
Approved for general use
Tools the organisation has evaluated, procured under enterprise agreements, and determined appropriate for standard work tasks. May include conditions: approved for internal drafting but not for customer-facing content without human review, for example. Employees can use these without seeking additional permission.
Tier 2
Approved for specific purposes only
Tools appropriate in defined contexts but not across the board. A transcription tool approved for internal meetings but not for client calls. A translation tool approved for general content but not for legal documents. Employees need to understand the scope before using these.
Tier 3
Not approved
Tools employees should not use for work purposes. This category should explain why, not simply prohibit. Samsung's experience in 2023 illustrates what happens when prohibition is not paired with reasoning: three engineers used ChatGPT to debug proprietary source code and summarise internal meetings, not out of malice but because no approved alternative existed. Every prompt they entered left the company's control and, under the consumer terms in force at the time, could be retained and used to train OpenAI's models. Samsung's response was a blanket ban, and it pushed usage underground rather than governing it.
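
One way to keep the tiers usable day to day is to maintain them as a small machine-readable register alongside the written policy, so an intranet page or internal helper can answer "which tier is this tool in?" at the moment of decision. A minimal sketch, with tool names, conditions, and reasons that are purely illustrative:

```python
# Minimal sketch of a tool register mirroring the policy's three tiers.
# Tool names, conditions, and reasons are illustrative placeholders,
# not a recommended or prohibited tool list.
TIER_LABELS = {
    1: "Approved for general use",
    2: "Approved for specific purposes only",
    3: "Not approved",
}

TOOL_REGISTER = {
    "enterprise-chat":       {"tier": 1, "conditions": "Human review before customer-facing use"},
    "meeting-transcriber":   {"tier": 2, "conditions": "Internal meetings only, not client calls"},
    "free-consumer-chatbot": {"tier": 3, "conditions": None,
                              "reason": "No enterprise agreement or DPA; prompts may be retained"},
}

def tier_for(tool: str) -> str:
    entry = TOOL_REGISTER.get(tool)
    if entry is None:
        # Unknown tools default to the most restrictive answer until reviewed.
        return "Not listed: treat as not approved until reviewed"
    label = TIER_LABELS[entry["tier"]]
    conditions = entry.get("conditions")
    return f"{label} ({conditions})" if conditions else label

print(tier_for("meeting-transcriber"))
print(tier_for("some-new-tool"))
```

The register does not replace the policy; it removes the friction of having to find and reread it at the moment an employee needs an answer.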

A clear tiered structure removes the ambiguity that leads to poor individual decisions. Most employees make reasonable choices when they understand the reasoning behind the rules. Most make poor ones when they have to choose between a vague prohibition and a practical problem that AI could solve in thirty seconds. But knowing which tools are approved is only half the picture. The other half is knowing what information can go into them.

Section 04

Write the Data Handling Rules

The data handling section is where legal and compliance risk is most concentrated, and where vague language causes the most damage. Telling employees "do not enter sensitive information" sounds like a rule. It is not one. Employees do not always know what qualifies as sensitive, and the categories that matter differ by industry, role, and jurisdiction. The policy needs to name them.

LayerX Security's 2025 enterprise report found that nearly 40% of files pasted into AI tools contain PII or payment card data, and 22% of pasted text includes sensitive regulatory information. Most employees would not instinctively classify either category as off-limits.

At minimum, most organisations should prohibit entering the following into any non-approved AI system:

Personal data of employees or customers as defined under GDPR — names, contact details, performance records, health information, or any data that could identify an individual
Commercially confidential information including financial data, pricing, unreleased product information, and strategic plans
Legally privileged communications — correspondence with legal counsel, draft contracts, litigation materials
Credentials, access tokens, or authentication data of any kind

For organisations in regulated sectors, such as financial services, healthcare, and legal, the list will be longer and the policy will need to interact with existing data classification frameworks rather than replace them.

When writing this section, the most effective format is a short table or list with three columns: the data category, a plain-language description of what it includes, and the rule that applies. "Personal data — names, contact details, performance records — must not be entered into any AI system without an active DPA" is enforceable. "Be careful with sensitive data" is not. The goal is a section that an employee can read in two minutes and use to make a decision in five seconds.
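
If you keep those three columns as structured data rather than prose, the same source can generate the table in the policy document and feed any tooling that later reminds employees of the rules. A minimal sketch, with category names and rules that are illustrative rather than prescriptive:

```python
# Minimal sketch: the data handling rules kept as a single structured source.
# Categories, descriptions, and rules are illustrative; adapt them to your
# own data classification framework.
DATA_RULES = [
    ("Personal data",
     "Names, contact details, performance records, health information",
     "Must not enter any AI system without an active DPA"),
    ("Commercially confidential",
     "Financial data, pricing, unreleased products, strategic plans",
     "Approved enterprise tools only; never consumer versions"),
    ("Legally privileged",
     "Correspondence with counsel, draft contracts, litigation materials",
     "Must not enter any AI system"),
    ("Credentials",
     "Passwords, access tokens, API keys, authentication data",
     "Must not enter any AI system"),
]

def render_policy_table() -> str:
    """Render the three-column rule table for the published policy."""
    header = f"{'Category':<26}| {'What it includes':<58}| Rule"
    rows = [f"{cat:<26}| {desc:<58}| {rule}" for cat, desc, rule in DATA_RULES]
    return "\n".join([header] + rows)

print(render_policy_table())
```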

The data handling rules are also where GDPR intersects most directly with AI governance. Any AI tool that processes personal data on behalf of the organisation requires a data processing agreement. The policy should make clear that using an AI tool for tasks involving personal data without a DPA in place is not permitted. The regulatory pressure is only increasing: US federal agencies issued 59 new AI-related regulations in 2024 alone, more than double the previous year, and that enforcement trajectory makes data handling the section most likely to be tested first. Once you have governed what goes in, the policy needs to address what comes out.

⚠ The GDPR-AI Intersection

If an employee pastes customer data into a consumer AI tool with no DPA, your organisation may have committed a GDPR data breach. The test is not whether anything went wrong. It is whether personal data was processed by an unauthorised third party. The policy needs to name this risk explicitly so employees understand the stakes, not just the rule.

Section 05

Set Expectations Around Output and Accountability

One of the most consequential gaps in most organisations' current AI practice is the absence of any clear standard for how AI-generated or AI-assisted outputs should be reviewed before use. Employees who use AI to draft content, generate analysis, or produce summaries often apply less scrutiny than they would to their own work. Not out of carelessness, but because the output looks finished.

WIRED's internal AI editorial policy, published in early 2023, addressed this directly: undisclosed AI-generated text would be treated as plagiarism. It was a deliberately unambiguous standard that made accountability concrete rather than aspirational. Most organisations need the equivalent logic applied to their own context.

The policy should establish, without ambiguity, that responsibility for any AI-assisted output sits with the employee who uses it. This means the employee is expected to review outputs for accuracy, appropriateness, and fit before using them. "AI generated it" is not a defence for an error that reaches a customer, a regulator, or a decision-maker.

Where disclosure of AI involvement is required, whether in certain regulatory contexts or as a matter of internal quality standards, the policy should specify when and how that disclosure is made. Vague commitments to transparency are not enforceable. Named conditions are. And for organisations operating in the EU, transparency requirements are not just an internal choice — they are starting to be a legal one.

The Accountability Principle

The policy should make one thing unambiguous: the employee who uses an AI output owns it. Signing off on AI-generated work without reviewing it is the same, for accountability purposes, as signing off on work you did yourself without reviewing it. Most employees understand this intuitively once it is stated. Most policies never state it.

Section 06

Address the EU AI Act and Regulatory Context

For organisations operating in the EU, or with EU employees or customers, an AI acceptable use policy does not exist in a regulatory vacuum. The EU AI Act imposes specific obligations on deployers of high-risk AI systems — and a well-written acceptable use policy creates the operational foundation for meeting several of them directly.

Practically, this means the policy needs to address four things the Act requires of deployers. First, that AI systems are used within their intended purpose as documented by the provider — your policy's tool authorisation section should reference this explicitly. Second, that human oversight is assigned and operable — your policy should name who is responsible for oversight of each high-risk system, not just state that oversight is required. Third, that workers are notified before AI systems affecting them are deployed — your policy should include a commitment to this notification, not leave it as an informal practice. Fourth, that the organisation can demonstrate compliance — which means training records, incident logs, and documentation of the policy itself need to be retained.
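
The fourth requirement, demonstrating compliance, is mostly a matter of keeping records from day one. The sketch below shows one possible shape for retained training and incident records; the field names are illustrative assumptions, not a format prescribed by the Act.

```python
# Minimal sketch: records retained to demonstrate compliance.
# Field names are illustrative assumptions, not a format prescribed
# by the EU AI Act.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class TrainingRecord:
    employee: str          # role title or ID rather than a personal name
    course: str
    completed_on: date
    policy_version: str    # which version of the acceptable use policy the training covered

@dataclass
class IncidentRecord:
    reported_on: date
    tool: str
    summary: str
    data_categories: list[str] = field(default_factory=list)
    remediation: str = ""

log = [
    TrainingRecord("Support Team Lead", "AI acceptable use, role-specific",
                   date(2025, 3, 4), "v1.1"),
    IncidentRecord(date(2025, 4, 12), "free-consumer-chatbot",
                   "Customer query pasted into non-approved tool",
                   ["customer_pii"], "Employee retrained; tool blocked at proxy"),
]

print(json.dumps([asdict(r) for r in log], default=str, indent=2))
```

Whatever format you choose, the point is that the records exist before anyone asks for them.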

The Act also bans certain AI uses outright — social scoring and certain forms of biometric surveillance among them — so the policy's "not approved" tier should explicitly prohibit any use that falls into these categories, not just tools that have been informally disfavoured. Get legal review before publication if your organisation uses AI in recruitment, credit, benefits, or other Annex III high-risk categories. The full picture of what deployers must do is in the EU AI Act deployer obligations guide, and the training requirements are covered in what the EU AI Act means for your team's training. A policy that survives legal review is necessary. One that employees have never heard of is not enough.

⚠ Penalties for Non-Compliance

Violations of high-risk AI obligations under the Act carry fines of up to €15 million or 3% of total worldwide annual turnover, whichever is higher. An acceptable use policy that is inconsistent with Article 26 obligations does not reduce that exposure. It just makes the inconsistency harder to spot before enforcement.

Section 07

Communicate It — and Train People to Follow It

A policy that is published on an intranet and never mentioned again will not change behaviour. Communication and training are not optional extras on top of the policy. They are the mechanism by which the policy has any effect at all.

Leadership should introduce the policy directly, explaining why it exists and what problem it solves. A policy that arrives without context reads as restriction. A policy introduced with reasoning reads as guidance. The difference in employee reception is significant. One-third of executives believe their company tracks all AI usage, but only 9% actually have working governance systems. That gap widens when employees receive rules without the reasoning behind them and route around them accordingly.

Training should follow, and it needs to be specific to the policy rather than a generic AI awareness session. Employees should leave knowing exactly which tools they can use for which tasks, what they should never enter into an AI system, and what to do if they are uncertain. If they cannot answer those three questions, the training has not done its job. The distinction between awareness and the applied capability needed to follow a policy in real decisions is covered in AI upskilling vs. AI awareness training. And even the best-communicated policy will need revisiting — because the environment it governs will not stay still.

What Good Communication Looks Like

A financial services firm rolls out its AI acceptable use policy with a thirty-minute all-hands where the CEO explains specifically why the policy exists, citing two recent incidents where client data was entered into non-approved tools, and what will change. Every manager receives a one-pager mapping the policy's rules to their team's actual tool usage. Training is delivered in the following two weeks, role by role, covering the specific tools each function uses and the specific rules that apply to them.

Six months later, a follow-up audit against the original baseline shows a 60% reduction in sensitive data entering non-approved AI tools. The policy did not cause that reduction. The communication and training caused it. The policy provided the framework they could work from.

Section 08

Build In a Review Cadence

An AI acceptable use policy written today and left unchanged through next year will be out of date long before anyone notices. The tools available to employees change. The regulatory environment changes. The ways employees are actually using AI change.

AI usage policies are fast becoming the norm as businesses across industries adopt AI, and many companies are already revisiting and updating theirs, often becoming more permissive while meeting new transparency requirements. A policy that was appropriately cautious in 2023 may be unnecessarily restrictive by 2025, or inadequately specific by 2026. Both directions carry costs.

Build a review cycle into the policy itself: six months for the first review after publication, then annually unless a significant regulatory or tooling change requires an earlier update. Assign ownership clearly. Someone needs to be responsible for triggering the review, gathering input from legal, HR, and operational leads, and publishing the revised version.

The first version of the policy does not need to be perfect. It needs to be accurate, specific, and communicated. A living document that is regularly reviewed and updated is more valuable than an exhaustive one that is never touched again.

AI Acceptable Use Policy — What to Cover
Tiered tool authorisation — approved, conditionally approved, and prohibited tools named explicitly, with reasons provided for each tier.
Data handling rules — specific categories of information that must not enter non-approved AI systems, named by category rather than described generically.
Output accountability — clear statement that responsibility for AI-assisted work rests with the employee who uses it, regardless of how it was generated.
Disclosure requirements — when and how AI involvement must be declared, named by context rather than left to individual judgment.
GDPR alignment — DPA requirements for tools processing personal data, and a clear statement that no such tool may be used without one.
EU AI Act consistency — policy reviewed against Article 26 obligations where applicable, particularly for organisations using AI in recruitment, credit, or benefits decisions.
Communication plan — how and by whom the policy will be introduced to employees, with leadership visibility and reasoning, not just a notification email.
Role-specific training — practical guidance on following the policy for each function that uses AI tools, not generic awareness content.
Review cadence — named owner and scheduled review date included in the document itself, not managed informally.
Frequently Asked Questions
AI Acceptable Use Policy — Common Questions
Answers to the questions HR, legal, and compliance leads most commonly ask when drafting or auditing an AI acceptable use policy.
What tone should an AI acceptable use policy be written in?
Plain language, written for the people who have to follow it — not for lawyers. The test is whether an employee without a compliance background can read the relevant section and know exactly what to do. That means avoiding phrases like "appropriate safeguards" or "sufficient oversight" without defining them. It also means writing from the employee's perspective: "do not paste customer names into ChatGPT" lands differently than "personal data must not be processed via non-approved AI systems." Both say the same thing. Only one will be followed. A policy written to be defensible in litigation is not the same as a policy written to change behaviour. You need the second one.
Who should own the AI acceptable use policy?
Ownership typically sits with a named lead in legal, HR, or a dedicated AI governance function — but the more important question is who is responsible for updating it. Many organisations have a policy that someone wrote once and nobody owns. That is worse than no policy, because it creates a false impression of governance. The policy should name both an owner and a deputy, and include their role titles rather than personal names so ownership survives staff turnover. Legal drafts it. HR communicates it. Operational leads provide the inputs for each update cycle. The owner coordinates all three.
How should AI tools be categorised in an acceptable use policy?
A tiered structure works best: approved for general use (evaluated and procured under enterprise agreements); approved for specific purposes only (appropriate in defined contexts with conditions); and not approved (with an explanation of why, not just a prohibition). Samsung's 2023 experience shows what happens when prohibition is not paired with reasoning. Employees route around it. A tiered structure removes the ambiguity that leads to poor individual decisions.
What data should be prohibited from AI systems in an acceptable use policy?
At minimum: personal data of employees or customers as defined under GDPR; commercially confidential information including financial data, pricing, and unreleased product information; legally privileged communications; and credentials or authentication data of any kind. Name these categories explicitly. Generic language like "sensitive information" is not actionable. For regulated sectors, the policy should interact with existing data classification frameworks rather than replace them. Any AI tool processing personal data on the organisation's behalf requires a data processing agreement.
How often should an AI acceptable use policy be reviewed?
Six months for the first review after publication, then annually unless a significant regulatory or tooling change requires earlier action. Build the review cycle into the policy document itself with a named owner; a review managed informally is a review that will not happen. A policy written today and left unchanged through next year will be out of date before anyone notices.
A policy sets the rules.
Training is how employees learn to follow them.

Savia's AI governance and AI literacy learning paths give employees the practical judgment to apply your acceptable use policy in real decisions, not just read it once and forget it.