---
title: The 1-page UK AI policy template that keeps the ICO off your back
description: "Most UK AI policy templates are eight pages of legalese nobody reads. After 120+ AI projects across 15+ industries, I've watched companies fail audits because the policy lived in a SharePoint folder no employee opened. This is the 1-page version that actually works. It anchors in ICO guidance and UK GDPR, covers scope, prohibited uses, data classes, human oversight, vendor approval and incident reporting. You get the working template, the clauses to copy, and the five ICO principles every policy needs to reference."
canonical: https://richardbatt.com/blog/uk-ai-policy-1-page-template-ico
date: 2026-05-05
author: Richard Batt
tags: [AI Governance, ICO, UK GDPR, AI Policy]
type: blog_post
---

# The 1-page UK AI policy template that keeps the ICO off your back

_Most UK AI policy templates are eight pages of legalese nobody reads. After 120+ AI projects across 15+ industries, I've watched companies fail audits because the policy lived in a SharePoint folder no employee opened. This is the 1-page version that actually works. It anchors in ICO guidance and UK GDPR, covers scope, prohibited uses, data classes, human oversight, vendor approval and incident reporting. You get the working template, the clauses to copy, and the five ICO principles every policy needs to reference._

**Richard Batt** — AI implementation specialist. 120+ projects across 15+ industries, serving SMBs (5-200 employees) from Middlesbrough, UK (working globally). Contact: richard@richardbatt.com · https://richardbatt.com

A UK AI policy worth the paper it's printed on fits on one page. It tells your team six things in plain English: which tools are allowed, what's prohibited, how to classify data going into AI, who signs off on new vendors, when a human has to review the output, and what to do when something goes wrong. Anything longer ends up in a folder no employee opens. Anything shorter, and the Information Commissioner's Office (ICO) will find a hole in it.

I've sat through this conversation in 14 different SMBs over the last 18 months. The pattern is always the same. Someone in the leadership team panics about ChatGPT, asks a solicitor for an AI policy, gets back an eight-page document written in the language of GDPR Recital 71, and quietly shelves it. Three months later, a junior team member pastes a customer list into a free chatbot and the company has a problem nobody has the policy to handle.

So after 120+ AI implementations across 15+ industries, I've ended up writing essentially the same one-page document for nearly every client. This is the document, with my reasoning behind each clause, plus the ICO principles each one has to land on.

**The short version**

- A UK SMB AI policy fits on one page if it covers six things: scope, prohibited uses, data classifications, human oversight, vendor approval, incident reporting.
- The ICO's five-principle framework (safety, security and robustness; transparency and explainability; fairness; accountability and governance; contestability and redress) is the spine your policy must rest on.
- Eight-page legal templates fail because nobody reads them. The 1-page version actually gets used by line managers on Tuesday mornings.
- A working policy lives next to the AI tool, not in a SharePoint folder. Embed the rules in the workflow.
- Most ICO action against UK SMBs starts with a Data Subject Access Request, not a regulator audit. Plan for the request, not the audit.

## What a UK AI policy actually has to do

A UK AI policy is a working document. It tells your team how to use artificial intelligence tools at work, who signs them off, what data can and can't go into them, and what happens when something goes wrong. It sits between a privacy notice and an operations manual, and it carries legal weight under UK GDPR and the Data Protection Act 2018.

The ICO has been clear on what they expect. In their 2024 and 2025 AI guidance updates, the regulator listed five principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; contestability and redress. Every clause in a UK AI policy needs to map back to one of those five. If a clause doesn't, it's filler.

The policy isn't there to make the ICO happy in the abstract. It's there for the day someone makes a Data Subject Access Request (DSAR), or the day a customer complaint escalates, or the day a journalist phones up about an AI mistake. On those days, the policy is the document you hand to your lawyer, your insurer, or the regulator. If it's eight pages of generic GDPR text, it tells them nothing about how your business actually operates. If it's the 1-page version, it shows controls that match the work.

## Why the 8-page template fails every time

I've been brought in to clean up the aftermath at three SMBs with long AI policies. In all three cases the policy existed, was signed by the directors, and had been completely ignored by the team that needed it.

The pattern: the legal team adapts a generic template, padded with definitions, references to Article 22 of the UK General Data Protection Regulation (UK GDPR), descriptions of what AI is, and aspirational language about "ethical use." None of that survives contact with a Tuesday morning. A salesperson on a deadline pastes a draft contract into ChatGPT to summarise it. A marketing assistant drops a customer email list into Claude to draft follow-ups. The policy doesn't get consulted because nobody knows where it is or what it actually says.

The 1-page version solves three problems at once. It gets read, because it's short. It gets remembered, because it's specific. And it gives line managers a single sheet to point at when an employee asks "can I use AI for this?" That last point is the one that actually keeps the ICO away from your business.

## The 1-page UK AI policy template

Here's the working version. Copy it, change the company name, put your data classification scheme in section 3, and you're 80% of the way there.

> **[Company name] AI usage policy**
>
> **1. Scope.** This policy applies to every employee, contractor, and supplier using AI tools for [Company] work. AI tools include large language models (ChatGPT, Claude, Gemini, Copilot), image generators, voice agents, automated decision systems, and any third-party SaaS feature labelled "AI" or "intelligent."
>
> **2. Prohibited uses.** Do not use AI to: (a) make automated decisions about hiring, firing, pay, performance reviews, or customer pricing without human review; (b) generate content that will be sent externally without a named human reviewing it first; (c) process special category personal data such as health, biometric, racial or ethnic, religious, sexual orientation, trade union, or political data; (d) bypass an existing security or compliance control.
>
> **3. Data classifications.** Personal data goes into approved tools only. Special category data goes into no AI tool. Confidential commercial data (contracts, pricing, customer lists, financial records) goes into approved tools only. Public data is fine in any tool. The approved tools list is in [location], maintained by [role]. If it's not on the list, don't paste data into it.
>
> **4. Human oversight.** Every AI output that affects a customer, an employee, or a financial decision must be reviewed by a named person before it goes out. The reviewer is recorded. AI-generated content sent externally without review is a disciplinary matter.
>
> **5. Vendor approval.** New AI tools, including free ones, get approved by [role] before use. Approval requires: a data processing agreement (DPA), confirmation of where the data is stored (UK or EU preferred, US under the Data Bridge acceptable), and confirmation the vendor will not train its models on your inputs unless explicitly opted in.
>
> **6. Incident reporting.** If an AI tool produces a wrong, harmful, biased, or confidential output that has been seen by anyone outside the immediate user, report it to [role] within 24 hours. We document the incident, review it, and decide whether the ICO needs to be notified.
>
> Signed: [director], [date]. Reviewed: [date + 12 months].

That's the whole thing. Print it on one side of A4. Put it next to the laptop of every person likely to use AI. Make it part of induction. Review it once a year.

## How each clause maps to ICO principles

The policy is short but it does the work. Here's how each clause anchors in the ICO framework, so when an auditor asks "where's your evidence of accountability and governance?" you have an answer.

| Clause | ICO principle covered | Why it matters |
| --- | --- | --- |
| 1. Scope | Accountability and governance | You can't govern what you haven't named. The clause names every category of AI tool, including "anything labelled intelligent." |
| 2. Prohibited uses | Fairness, contestability and redress | Article 22 of UK GDPR restricts solely automated decisions with significant effects. This clause builds that boundary in plain English. |
| 3. Data classifications | Safety, security and robustness | Special category data has the strictest rules under UK GDPR. A blanket ban is simpler than trying to police it case by case. |
| 4. Human oversight | Transparency and explainability, contestability | Every external AI output is human-reviewed. That's the clause your lawyer will quote when a customer complains. |
| 5. Vendor approval | Accountability and governance | DPAs and data residency are where the regulator looks first. A "free tool" without a DPA is a personal data breach waiting to be filed. |
| 6. Incident reporting | All five | A 24-hour internal reporting window keeps you ahead of the 72-hour external breach notification rule, if it gets that far. |
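To make the clause 6 timings concrete, here's a minimal sketch of an incident record that tracks both clocks. It's illustrative only: the field names are mine, the 24-hour figure comes from the template above, and the 72-hour figure is the UK GDPR Article 33 window for notifying the ICO, which only applies where the incident amounts to a reportable personal data breach at all.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

INTERNAL_REPORT_WINDOW = timedelta(hours=24)   # clause 6: report internally within 24 hours
ICO_NOTIFICATION_WINDOW = timedelta(hours=72)  # UK GDPR Art. 33: 72 hours from becoming aware,
                                               # where the breach is notifiable at all

@dataclass
class AIIncident:
    tool: str                       # e.g. "unapproved free chatbot"
    description: str                # what the output was and who saw it
    detected_at: datetime           # when the company became aware
    reported_internally_at: datetime | None = None
    reviewed_by: str | None = None  # named reviewer, per clause 4

    def internal_deadline(self) -> datetime:
        return self.detected_at + INTERNAL_REPORT_WINDOW

    def ico_deadline(self) -> datetime:
        return self.detected_at + ICO_NOTIFICATION_WINDOW

    def internal_report_overdue(self, now: datetime) -> bool:
        return self.reported_internally_at is None and now > self.internal_deadline()


incident = AIIncident(
    tool="free chatbot (unapproved)",
    description="customer list pasted in; summary forwarded outside the company",
    detected_at=datetime(2026, 5, 5, 9, 30, tzinfo=timezone.utc),
)
print(incident.internal_deadline().isoformat())  # 2026-05-06T09:30:00+00:00
print(incident.ico_deadline().isoformat())       # 2026-05-08T09:30:00+00:00
```

The point of logging the detection time is that both deadlines run from the moment you became aware, not from the moment someone got round to filling in a form.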

## The five things the policy doesn't try to do

This is where most UK AI policies go wrong. They try to do work the policy isn't suited for, and end up doing none of it well. Five things to keep out:

1. Definitions of what AI is. The legal definition is moving and the boundary between "AI" and "automation" is fuzzy. Don't define it. List the categories of tool you mean.
2. A complete list of approved tools. That belongs in a separate live document, owned by IT or operations, updated when new tools are added (see the sketch after this list).
3. Detailed prompt-engineering guidance. That's a training document, not a policy. Put it in the AI Ops Vault or your internal wiki.
4. Reproducing UK GDPR text. The Data Protection Act 2018 is the law. Your policy is the operational layer on top of it. Don't paraphrase law.
5. Aspirational ethics statements. "We commit to using AI responsibly" tells the ICO nothing. The clauses above are how you actually use it responsibly.
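
On point 2, here's a minimal sketch of what that separate live document can look like in machine-readable form, so the clause 3 and clause 5 checks can be scripted rather than argued about. The field names and entries are placeholders, not vendor claims: check each supplier's actual DPA and residency terms before filling it in.

```python
# Illustrative register only: values are placeholders, not statements about
# any vendor's actual terms.
APPROVED_TOOLS = {
    "chatgpt-teams": {
        "owner": "Operations",
        "dpa_signed": True,
        "data_residency": "EU",
        "trains_on_inputs": False,
        "allowed_data": {"public", "confidential", "personal"},
    },
    "claude-for-work": {
        "owner": "Operations",
        "dpa_signed": True,
        "data_residency": "UK/EU",
        "trains_on_inputs": False,
        "allowed_data": {"public", "confidential", "personal"},
    },
}

def may_use(tool: str, data_class: str) -> bool:
    """Clauses 3 and 5 in one check: is this tool cleared for this class of data?"""
    if data_class == "special_category":
        return False  # clause 2(c): special category data goes into no AI tool
    if data_class == "public":
        return True   # clause 3: public data is fine in any tool
    entry = APPROVED_TOOLS.get(tool)
    if entry is None:
        return False  # clause 3: if it's not on the list, don't paste data into it
    return entry["dpa_signed"] and data_class in entry["allowed_data"]

print(may_use("chatgpt-teams", "personal"))        # True
print(may_use("random-free-chatbot", "personal"))  # False
```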

If a draft of the policy is creeping past one page, look at which of those five it's drifting into. Usually it's number one or number five.

## A worked example: the property management firm

Last month I wrote this policy for a 30-person property management firm running 12 sites in the South East. Their existing template was 11 pages long and had been adapted by a legal firm from a US SaaS company. Section 4 referenced California Consumer Privacy Act provisions that have no force in the UK. Nobody on the lettings team had read it.

We compressed it to one page in a 90-minute working session with the operations director. The substantive change: their approved tools list went from "we permit AI use subject to review" to a named list of three tools (ChatGPT Teams, Claude for Work, Microsoft Copilot inside Microsoft 365) with an explicit ban on free chatbots.

Two weeks later, a maintenance coordinator asked her line manager whether she could use a free Hindi-translation AI tool to read tenant messages. The line manager said no, pointed her at section 5, and told her to get it approved through IT first. That conversation wouldn't have happened under the 11-page policy, because nobody knew it said anything about the question.

That's the only test that matters. The policy works if line managers can use it on a Tuesday morning to answer a real question.

## The 30-day rollout plan

Writing the policy is the easy part. Getting it landed is the work. Here's the 30-day version that I've now run with seven different SMBs.

**Week 1: name the inventory.** List every AI tool already in use across the company. Most leadership teams underestimate by 4 to 6 tools. Ask each department to write down what they use. Include the free chatbots people pull up on personal phones during work.
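
If the departmental returns come back as plain lists, a few lines of scripting will collate them and flag anything that isn't already approved. A minimal sketch, with made-up returns and the named tool list from the worked example above:

```python
from collections import Counter

# Made-up Week 1 returns, one list per department.
returns = {
    "Sales": ["ChatGPT (free)", "Copilot"],
    "Marketing": ["Claude for Work", "ChatGPT (free)", "Canva AI"],
    "Finance": ["Copilot"],
}

APPROVED = {"Copilot", "Claude for Work", "ChatGPT Teams"}

# Count how widely each tool is used and flag anything off the approved list.
usage = Counter(tool for tools in returns.values() for tool in tools)
for tool, count in usage.most_common():
    flag = "" if tool in APPROVED else "  <-- not approved: route through clause 5"
    print(f"{tool}: {count} department(s){flag}")
```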

**Week 2: draft the policy.** Use the template above. Adjust the prohibited uses to match your sector (estate agents, accountancy, healthcare, legal each need a sector-specific restriction). Get sign-off from a director.

**Week 3: announce it.** Twenty-minute all-hands. Walk through the six clauses. Take questions. The questions tell you where the policy will fail. Fix the language before it goes live.

**Week 4: embed it.** Put the policy next to the tools. A printed sheet beside each laptop, or a banner inside the approved AI tool's workspace, or both. Add it to onboarding. Schedule the 12-month review.

The whole process costs about 6 to 10 hours of a senior person's time. Most of the SMBs I've worked with were quoted £8,000 to £20,000 for an external policy review and got back something less useful than the version above.

## Frequently asked questions

**Do UK SMBs legally need a written AI policy?**

There's no UK law that says you must have a document titled "AI policy." But UK GDPR Article 24 requires you to demonstrate accountability for how personal data is processed, and Article 35 requires Data Protection Impact Assessments for high-risk processing. If you're using AI on personal data, the ICO will expect documented governance. A written policy is the simplest way to show it.

**Does the ICO actually audit small businesses for AI use?**

Direct AI audits of SMBs are rare. Investigations triggered by complaints, DSARs, or breach reports are common. In 2024, the ICO handled over 39,000 data protection complaints, a portion of which involved automated systems. A written policy is what you produce when one of those investigations lands.

**Is ChatGPT Free safe for UK business use?**

Treat the consumer ChatGPT as a personal data risk by default. OpenAI's free tier may use inputs to train models unless you opt out, and the terms of service do not include the kind of data processing agreement a UK business needs. ChatGPT Teams or Enterprise, Claude for Work, and Microsoft Copilot inside Microsoft 365 are the safer SMB defaults.

**What about Article 22 of UK GDPR and automated decisions?**

Article 22 restricts solely automated decisions that produce legal or similarly significant effects on a person. If you let an AI tool decide who gets hired, fired, or charged a particular price with no human review, you're inside the Article 22 zone. Adding a named human reviewer to those decisions takes you out of it. That's why clause 4 of the template is non-negotiable.
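
To show what that looks like in an automation script rather than a policy sentence, here's a minimal sketch of a review gate, assuming AI recommendations are queued for a named person instead of auto-applied. The names are mine, and to be clear, a rubber-stamp click won't take you outside Article 22: the reviewer has to have real authority and enough information to change the outcome.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str                 # the person the decision is about
    recommendation: str          # what the AI suggested, e.g. "reject application"
    reviewer: str | None = None  # named human, recorded per clause 4
    approved: bool | None = None

def finalise(decision: Decision) -> str:
    """Refuse to act on an AI recommendation until a named human has reviewed it."""
    if decision.reviewer is None or decision.approved is None:
        raise RuntimeError("clause 4: no named reviewer recorded, decision cannot be actioned")
    if not decision.approved:
        return f"escalated for manual handling (reviewer: {decision.reviewer})"
    return f"{decision.recommendation} (reviewed by {decision.reviewer})"

d = Decision(subject="Applicant 1042", recommendation="shortlist for interview")
d.reviewer, d.approved = "A. Manager", True
print(finalise(d))  # shortlist for interview (reviewed by A. Manager)
```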

**Do we need a Data Protection Impact Assessment (DPIA) for AI?**

If you're processing personal data with AI in a way that's likely to be high risk (large-scale profiling, special category data, monitoring of public spaces, decisions about employment), yes. The ICO's view is clear: a DPIA is the right tool for documenting AI-specific risks. The 1-page policy doesn't replace a DPIA. It sits alongside it.

**How often should the policy be reviewed?**

Once a year, plus any time you adopt a new category of AI tool (voice agents, image generators, autonomous agents). Most SMBs I work with run a 30-minute review with the directors every 12 months. If nothing has changed, the review document is one paragraph long. That's fine.

## The unhelpful truth about AI compliance

Most of the SMB AI compliance content online is written by lawyers selling AI policy reviews and by SaaS vendors selling compliance dashboards. Both groups have an interest in making the rules sound complicated. But the rules aren't actually complicated. The ICO has published consistent guidance, the principles are public, and the clauses you need fit on a single sheet of A4.

The danger isn't that the rules are too hard. The danger is that you spend three months on a 14-page document that nobody reads, and a junior team member pastes a customer list into a free chatbot anyway. The 1-page policy gets read. That's its only real competitive advantage over the alternative.

If you want the rest of the AI governance toolkit, the AI Ops Vault has the full clause library, the DPIA templates, and the vendor due-diligence checklist I've used across 120+ implementations. It's at https://richardbatt.co.uk/vault.

If you'd rather have someone walk through your specific business, list your actual tools, and write the policy alongside you in a working session, the AI Roadmap is the fastest path. It's at https://richardbatt.co.uk/roadmap.

The policy itself is short. The work is in landing it. Start with the template, get it on one page, get it signed, get it next to the laptops. The rest follows.

---

## More about Richard Batt

Richard Batt is an AI implementation specialist who helps businesses deploy working AI automation in days, not months. 120+ projects across 15+ industries.

### Key pages

- [Home](https://richardbatt.com/)
- [About Richard](https://richardbatt.com/about)
- [Blog](https://richardbatt.com/blog)
- [Contact](https://richardbatt.com/contact)
- [Subscribe](https://richardbatt.com/subscribe)

### Contact

- Email: richard@richardbatt.com
- Location: Middlesbrough, UK (working globally)
- Website: https://richardbatt.com