Most organisations using AI tools today are doing so without a formal policy governing that use. Employees are making individual judgments about what to put into AI systems, which tools are appropriate for which tasks, and what to do when an output looks wrong. Some of those judgments are sound. Many are not.
The scale of the problem is not theoretical. Only 10% of companies have a comprehensive, formal AI policy in place, and more than one in four say no policy exists in their workplace, nor is there any plan for one. At the same time, employees are already acting: according to a 2024 survey of over 7,000 workers, 38% of employed AI users have submitted sensitive work information to AI tools without their employer's knowledge.
An AI acceptable use policy changes that. It sets out, clearly and in plain language, what AI tools employees are authorised to use, what they are not, what kind of information should never enter an AI system, and what the organisation expects from anyone using AI in their work. This guide walks through how to write one. A policy that employees cannot follow because it is too vague, or will not follow because it was never communicated, is not a policy. It is a document.
Decide What the Policy Is Actually Trying to Govern
Before any drafting begins, leadership needs to be clear about what problem the policy is solving. Organisations that skip this step produce policies that are either too broad to be actionable or too narrow to cover the risks that actually matter.
There are three distinct things an AI acceptable use policy can govern, and most organisations need to address all three.
Tool authorisation — which AI tools employees are permitted to use, and in what contexts. This matters because employees are often already using tools the organisation has not approved, including free consumer versions that handle data very differently from enterprise equivalents.
Data handling — what information can and cannot be entered into an AI system. Customer data, personal data, commercially sensitive information, legally privileged content: these categories each carry different risk profiles and need different rules.
Output responsibility — who is accountable for AI-assisted work, how outputs should be reviewed before use, and when AI-generated content requires disclosure.
Most organisations have gaps across all three. Knowing where yours are largest shapes everything that follows — the scope of the audit, the specificity of the tool tiers, the emphasis of the data rules. Before any drafting, you need to understand the real landscape. Which is why the next step is not writing. It is observing. For the broader governance layer that sits around the policy, AI content accountability covers how to manage risk, quality, and ownership in AI-assisted workflows.
Audit What Is Already Happening
A policy written in isolation from how employees are already using AI will miss the most important risks and create rules that bear no relationship to actual behaviour. Before drafting, find out what tools employees are currently using, in which teams, for which tasks. A short survey or a series of team conversations will reveal the landscape quickly. What you are looking for is the gap between what you think is happening and what is actually happening.
In most organisations, that gap is significant — and it runs in a predictable direction. Employees are not typically entering the most obviously sensitive data into AI tools. They are entering the materials they use every day: meeting notes, customer queries, support tickets, draft proposals. The risk is not malicious intent. It is routine work colliding with the wrong tool.
The audit has two functions: it gives you accurate inputs for policy design, and it gives you a baseline against which you can measure whether the policy, once published, actually changes behaviour. A policy you cannot measure is a policy you cannot improve. Once you know what is actually happening, you can start turning that picture into a coherent set of rules — beginning with how tools are categorised.
Define Your Tool Categories
One of the most practically useful things an AI acceptable use policy can do is give employees a clear, simple framework for understanding which tools are approved for which purposes. The most workable approach is a tiered structure with three levels: approved (enterprise tools the organisation has vetted and contracted for, usable for everyday work within the data handling rules), approved with conditions (tools permitted only for named teams, tasks, or data types, with those conditions stated in the policy), and not approved (everything else, including free consumer versions of tools whose enterprise equivalents are approved).
A clear tiered structure removes the ambiguity that leads to poor individual decisions. Most employees make reasonable choices when they understand the reasoning behind the rules. Most make poor ones when they have to choose between a vague prohibition and a practical problem that AI could solve in thirty seconds. But knowing which tools are approved is only half the picture. The other half is knowing what information can go into them.
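One way to make a tier structure concrete is to publish it as a machine-readable register alongside the policy text, so approval status can be looked up unambiguously. A minimal sketch in Python; the tool names and tier labels here are illustrative placeholders, not recommendations of specific vendors:

```python
# Hypothetical tool register. Entries are examples only; a real register
# would be maintained by whoever owns the policy.
TOOL_TIERS = {
    "approved": {"Enterprise Chat Assistant", "Company-Tenant Copilot"},
    "approved_with_conditions": {"Pilot Analytics Tool"},
    "not_approved": {"Free Consumer Chatbot", "Unvetted Browser Plugin"},
}

def tier_of(tool: str) -> str:
    """Return the authorisation tier for a tool.

    Unlisted tools default to 'not_approved': the most restrictive
    tier applies until someone explicitly adds the tool to the register.
    """
    for tier, tools in TOOL_TIERS.items():
        if tool in tools:
            return tier
    return "not_approved"
```

The default-deny lookup mirrors the policy logic: a tool nobody has reviewed is treated as unapproved, rather than falling into a grey zone.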
Write the Data Handling Rules
The data handling section is where legal and compliance risk is most concentrated, and where vague language causes the most damage. Telling employees "do not enter sensitive information" sounds like a rule. It is not one. Employees do not always know what qualifies as sensitive, and the categories that matter differ by industry, role, and jurisdiction. The policy needs to name them.
LayerX Security's 2025 enterprise report found that nearly 40% of files pasted into AI tools contain PII or payment card data, and 22% of pasted text includes sensitive regulatory information. Most employees would not instinctively classify either category as off-limits.
The policy needs to name categories explicitly. At minimum, most organisations should prohibit entry into any non-approved AI system of:

Personal data: names, contact details, identifiers, performance records, and anything else relating to an identifiable person.

Customer data: account details, support histories, and any material a customer has shared in confidence.

Commercially sensitive information: pricing, unreleased plans, financial results, and contract terms.

Legally privileged content: anything covered by legal professional privilege or prepared in connection with litigation.

Payment card and financial account data.
For organisations in regulated sectors, such as financial services, healthcare, and legal services, the list will be longer and the policy will need to interact with existing data classification frameworks rather than replace them.
When writing this section, the most effective format is a short table or list with three columns: the data category, a plain-language description of what it includes, and the rule that applies. "Personal data — names, contact details, performance records — must not be entered into any AI system without an active DPA" is enforceable. "Be careful with sensitive data" is not. The goal is a section that an employee can read in two minutes and use to make a decision in five seconds. Once you have governed what goes in, the policy needs to address what comes out.
The data handling rules are also where GDPR intersects most directly with AI governance. Any AI tool that processes personal data on behalf of the organisation requires a data processing agreement. The policy should make clear that using an AI tool for tasks involving personal data without a DPA in place is not permitted. US federal agencies issued 59 new AI-related regulations in 2024 alone, more than double the previous year, and the enforcement trajectory makes data handling the section most likely to be tested first.
If an employee pastes customer data into a consumer AI tool with no DPA, your organisation may have committed a GDPR data breach. The test is not whether anything went wrong. It is whether personal data was processed by an unauthorised third party. The policy needs to name this risk explicitly so employees understand the stakes, not just the rule.
Set Expectations Around Output and Accountability
One of the most consequential gaps in most organisations' current AI practice is the absence of any clear standard for how AI-generated or AI-assisted outputs should be reviewed before use. Employees who use AI to draft content, generate analysis, or produce summaries often apply less scrutiny than they would to their own work. Not out of carelessness, but because the output looks finished.
WIRED's internal AI editorial policy, published in early 2023, addressed this directly: undisclosed AI-generated text would be treated as plagiarism. It was a deliberately unambiguous standard that made accountability concrete rather than aspirational. Most organisations need the equivalent logic applied to their own context.
The policy should establish, without ambiguity, that responsibility for any AI-assisted output sits with the employee who uses it. This means the employee is expected to review outputs for accuracy, appropriateness, and fit before using them. "AI generated it" is not a defence for an error that reaches a customer, a regulator, or a decision-maker.
Where disclosure of AI involvement is required, in certain regulatory contexts or as a matter of internal quality standards, the policy should specify when and how that disclosure is made. Vague commitments to transparency are not enforceable. Named conditions are. And for organisations operating in the EU, transparency requirements are not just an internal choice — they are starting to be a legal one.
Address the EU AI Act and Regulatory Context
For organisations operating in the EU, or with EU employees or customers, an AI acceptable use policy does not exist in a regulatory vacuum. The EU AI Act imposes specific obligations on deployers of high-risk AI systems — and a well-written acceptable use policy creates the operational foundation for meeting several of them directly.
Practically, this means the policy needs to address four things the Act requires of deployers. First, that AI systems are used within their intended purpose as documented by the provider — your policy's tool authorisation section should reference this explicitly. Second, that human oversight is assigned and operable — your policy should name who is responsible for oversight of each high-risk system, not just state that oversight is required. Third, that workers are notified before AI systems affecting them are deployed — your policy should include a commitment to this notification, not leave it as an informal practice. Fourth, that the organisation can demonstrate compliance — which means training records, incident logs, and documentation of the policy itself need to be retained.
The Act also bans certain AI uses outright — biometric surveillance and social scoring without oversight among them — so the policy's "not approved" tier should explicitly prohibit any use that falls into these categories, not just tools that have been informally disfavoured. Get legal review before publication if your organisation uses AI in recruitment, credit, benefits, or other Annex III high-risk categories. The full picture of what deployers must do is in the EU AI Act deployer obligations guide, and the training requirements are covered in what the EU AI Act means for your team's training. A policy that survives legal review is necessary, but it is not sufficient: employees also have to know it exists.
Violations of high-risk AI obligations under the Act carry fines of up to €15 million or 3% of total worldwide annual turnover, whichever is higher. An acceptable use policy that is inconsistent with Article 26 obligations does not reduce that exposure. It just makes the inconsistency harder to spot before enforcement.
Communicate It — and Train People to Follow It
A policy that is published on an intranet and never mentioned again will not change behaviour. Communication and training are not optional extras on top of the policy. They are the mechanism by which the policy has any effect at all.
Leadership should introduce the policy directly, explaining why it exists and what problem it solves. A policy that arrives without context reads as restriction. A policy introduced with reasoning reads as guidance. The difference in employee reception is significant. One-third of executives believe their company tracks all AI usage, but only 9% actually have working governance systems. That gap widens when employees receive rules without the reasoning behind them and route around them accordingly.
Training should follow, and it needs to be specific to the policy rather than a generic AI awareness session. Employees should leave knowing exactly which tools they can use for which tasks, what they should never enter into an AI system, and what to do if they are uncertain. If they cannot answer those three questions, the training has not done its job. The distinction between awareness and the applied capability needed to follow a policy in real decisions is covered in AI upskilling vs. AI awareness training. And even the best-communicated policy will need revisiting — because the environment it governs will not stay still.
A financial services firm rolls out its AI acceptable use policy with a thirty-minute all-hands where the CEO explains specifically why the policy exists, citing two recent incidents where client data was entered into non-approved tools, and what will change. Every manager receives a one-pager mapping the policy's rules to their team's actual tool usage. Training is delivered in the following two weeks, role by role, covering the specific tools each function uses and the specific rules that apply to them.
Six months later, the audit baseline shows a 60% reduction in sensitive data entering non-approved AI tools. The policy did not cause that reduction on its own. The communication and training caused it. The policy provided the framework the communication and training could work from.
Build In a Review Cadence
An AI acceptable use policy written today and left unchanged through next year will be out of date long before anyone notices. The tools available to employees change. The regulatory environment changes. The ways employees are actually using AI change.
AI usage policies have become the new norm as businesses across industries adopt various AI technologies, and many companies are already revisiting and updating their policies to become more permissive while meeting new transparency requirements. A policy that was appropriately cautious in 2023 may be unnecessarily restrictive by 2025, or inadequately specific by 2026. Both directions carry costs.
Build a review cycle into the policy itself: six months for the first review after publication, then annually unless a significant regulatory or tooling change requires an earlier update. Assign ownership clearly. Someone needs to be responsible for triggering the review, gathering input from legal, HR, and operational leads, and publishing the revised version.
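The cadence is simple enough to encode in whatever system tracks the policy owner's reminders. A sketch, assuming the six-months-then-annual schedule described above; the helper name and the day-clamping simplification are illustrative choices:

```python
from datetime import date

def next_review(published: date, reviews_completed: int) -> date:
    """Next scheduled review date for the policy.

    First review six months after publication, then annually.
    A significant regulatory or tooling change would trigger an
    earlier, out-of-cycle review, which this schedule does not model.
    """
    months_out = 6 + 12 * reviews_completed
    years_add, month_index = divmod(published.month - 1 + months_out, 12)
    # Clamp the day to 28 to sidestep short-month edge cases (simplification).
    return published.replace(
        year=published.year + years_add,
        month=month_index + 1,
        day=min(published.day, 28),
    )
```

Encoding the schedule this way makes the assigned owner's job concrete: the trigger fires, the owner gathers input from legal, HR, and operational leads, and the revised version ships.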
The first version of the policy does not need to be perfect. It needs to be accurate, specific, and communicated. A living document that is regularly reviewed and updated is more valuable than an exhaustive one that is never touched again.
Savia's AI governance and AI literacy learning paths give employees the practical judgment to apply your acceptable use policy in real decisions, not just read it once and forget it.