---
title: "UK GDPR and AI: the three ICO-flagged risks every SMB ignores until they're audited"
description: "The Information Commissioner's Office has been flagging the same three AI-and-data-protection risks for two years, and most UK small and mid-sized businesses still treat them as a future problem. After 120 AI projects across 15 industries, the pattern is depressingly consistent. The audit arrives, the risks were known, and the remediation cost is three times what a 30-person firm would have spent fixing them in advance. Here are the three risks, the four-step audit any SMB can run this month, and the FAQ on what counts and what doesn't."
canonical: https://richardbatt.com/blog/uk-gdpr-ai-ico-three-risks-smb
date: 2026-05-05
author: Richard Batt
tags: [UK GDPR, ICO, AI Compliance, AI Risk]
type: blog_post
---

# UK GDPR and AI: the three ICO-flagged risks every SMB ignores until they're audited

_The Information Commissioner's Office has been flagging the same three AI-and-data-protection risks for two years, and most UK small and mid-sized businesses still treat them as a future problem. After 120 AI projects across 15 industries, the pattern is depressingly consistent. The audit arrives, the risks were known, and the remediation cost is three times what a 30-person firm would have spent fixing them in advance. Here are the three risks, the four-step audit any SMB can run this month, and the FAQ on what counts and what doesn't._

**Richard Batt** — AI implementation specialist. 120+ projects across 15+ industries, serving SMBs (5-200 employees) worldwide from Middlesbrough, UK (working globally). Contact: richard@richardbatt.com · https://richardbatt.com

The Information Commissioner's Office has published guidance on AI and data protection three times since 2023. Each round has named the same three risks. Each round has made it slightly harder for a UK SMB to claim ignorance after the fact. So when the ICO audit arrives at a 30-person services firm in 2026, the practitioner reality is that the firm has been told what to fix for three years.

After 120 AI projects across 15 industries, I've watched a fair number of SMBs get there. The audit arrives. The risks were known. The remediation costs three times what a fix-in-advance would have cost. And the senior team is left explaining to the board why a problem that was on the agenda in 2024 wasn't dealt with by 2026.

This post is the practitioner version. The three ICO-flagged risks every UK SMB needs to know, the four-step audit any 30-person firm can run this month, and an FAQ on what counts and what doesn't. None of this is legal advice. All of it is what I work through with clients before the audit letter lands.

**TL;DR**

- Risk one: feeding personal data into a third-party AI tool without an established lawful basis for processing.
- Risk two: making automated decisions that significantly affect a person without a clear human-review mechanism (UK GDPR Article 22).
- Risk three: handling data subject access requests when AI summaries, model logs, and synthetic outputs sit alongside the original records.
- The four-step audit takes a 30-person firm around eight hours of senior time and surfaces 80 percent of the practical exposure.
- The DPIA (data protection impact assessment) plus a one-page AI policy resolves most of the gap before the ICO ever calls.

## What "UK GDPR and AI" actually covers in 2026

Start with the definition, because the press uses the term loosely. UK GDPR (the UK's post-Brexit version of the EU General Data Protection Regulation) governs how UK organisations process personal data, with the ICO as the regulator. When an SMB uses an AI tool that touches personal data (a customer's name, a staff member's appraisal, a candidate's CV, a patient identifier) the rules apply, regardless of whether the AI is a chatbot, a model running on a vendor's cloud, or a fine-tuned model running on the firm's own infrastructure.

The ICO's published guidance covers five principles relevant to AI: lawfulness, transparency, fairness, accountability, and contestability. Most UK SMBs reading this post are probably compliant on lawfulness and transparency for their non-AI processing. The gap is usually in how those same principles get reapplied when an AI tool enters the workflow.

The three risks below are the ones the ICO has named most consistently, and the ones I see SMBs miss most often.

## Risk one: personal data into third-party AI without lawful basis

The ICO has been clear since 2023. Pasting customer data, staff data, or candidate data into a third-party AI tool is processing under UK GDPR, and you need a lawful basis for it. For most SMBs, the lawful basis they've cited for their existing processing (legitimate interest, contract, consent) was written before they had AI tools, and it doesn't cover the new processing.

The practical version of this risk: a sales rep pastes a candidate's CV into ChatGPT to summarise it. A customer service team uses an AI summariser on inbound emails containing personal complaints. A finance person feeds a creditor list into a free AI tool to draft a chasing letter. None of those uses had a documented lawful basis when they started. None of those uses got a DPIA. And in most cases, the third-party AI vendor's terms allow them to use the input as training data, which means the personal data has now left the controller's reasonable control.

The fix is straightforward and ignored. First, decide which AI tools your firm uses on personal data and which it doesn't. Second, check the vendor's terms for whether inputs are used for training. Third, write or update your privacy notice and your records of processing to cover the AI use. Fourth, where the lawful basis is shaky, restrict the AI tool to non-personal data use or move the workflow onto a private model where the inputs do not train the third party.

A 30-person professional services firm I worked with last quarter went through this exercise in eight hours of senior time. They found four AI tools in active use that nobody had logged. Two were fine. One needed a privacy-notice update. The fourth was switched off and replaced with a private model wired into their existing M365 tenancy. Total fix cost was around £4,000. The audit cost if they hadn't done that work would have been considerably more.

## Risk two: automated decisions without human review (Article 22)

UK GDPR Article 22 gives a person the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects on them. The ICO has been explicit that hiring decisions, credit decisions, performance-management decisions, and access decisions all qualify as significant effects.

The risk for SMBs is rarely deliberate. It's that an AI tool got into the workflow and the human review step quietly disappeared. A recruitment tool now scores candidates and only the top-ranked CVs reach the hiring manager. An AI scheduling tool now assigns shifts based on past patterns, and the result is that the same staff get pushed onto the unpopular slots. A performance-management tool now drafts review summaries from email and ticket history, and the manager ticks "approve" without reading.

The Article 22 standard isn't that the AI suggests the decision. It's that the decision is based solely on automated processing. So if your AI scores 200 candidates and the hiring manager looks at five, that isn't solely automated. But if 195 were dismissed without review by a human who could overturn the decision, you're inside Article 22 territory.

The fix has three parts. First, identify every workflow where an AI is making or strongly shaping a decision that affects a person. Second, design in a human-review step that gets recorded for each decision rather than just signed off in policy. Third, write a clear way for the affected person to contest the decision and have a human re-make it. The ICO calls this contestability. It's the easiest principle to forget and the easiest one to demonstrate compliance with if you've designed it in.

A 22-person recruitment firm I worked with had an AI candidate-scoring tool running for nine months before they spotted the Article 22 exposure. The fix was a 30-minute change to the workflow: the tool now gives the recruiter a longlist with the AI score visible, and every candidate within a defined band gets a human review before rejection. Recorded in their applicant tracking system. Documented in their privacy notice. Total time to remediate was around 12 hours. The risk before remediation was a class-of-claimant complaint, not a single complainant.

## Risk three: data subject access requests when AI outputs exist

Under UK GDPR Article 15, a person can request a copy of all personal data an organisation holds about them. The original concept assumed the data was in defined records: a customer file, an HR file, an inbox folder. AI tools complicate the picture because the data now also exists in summaries, in model logs, in retrieval indexes, and sometimes in synthetic outputs that referenced the person but aren't stored as personal data in the original sense.

The ICO has flagged this risk repeatedly. A subject access request (SAR) lands. The data controller does the obvious search across email, CRM (customer relationship management), and HR systems. The AI summariser outputs aren't searched because nobody knew where they were stored. The retrieval index against a private LLM (large language model) isn't searched because no policy required it. The AI policy didn't say SARs should reach the AI tooling. The customer or ex-employee then complains that not everything was returned.

The fix sits across documentation and tooling. First, the records of processing under UK GDPR should explicitly list any AI tool that stores or generates personal data. Second, the SAR procedure should include a step for searching AI tool outputs, summaries, and indexes. Third, where an AI tool generates personal data (a draft about a person, a profile summary, a review note), that output should either be stored in a system the SAR procedure already covers, or the AI tool should be configured not to retain the output past the immediate workflow.
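The third part of that fix is easier to keep honest if the SAR search treats AI tool outputs as just another registered data store. Here is a minimal sketch of that idea; the store names and the lambda search functions are illustrative stand-ins, not real system integrations. Registering the AI summariser once means every future SAR searches it automatically, instead of relying on someone remembering.

```python
from typing import Callable

# Registry of searchable data stores. Adding an AI tool here is what
# keeps its outputs inside every future SAR.
SAR_STORES: dict[str, Callable[[str], list[str]]] = {}

def register_store(name: str, search_fn: Callable[[str], list[str]]) -> None:
    SAR_STORES[name] = search_fn

def run_sar(subject: str) -> dict[str, list[str]]:
    """Search every registered store and return hits per store, so a store
    that returns nothing is visible in the output rather than silently skipped."""
    return {name: fn(subject) for name, fn in SAR_STORES.items()}

# Illustrative registrations, including the AI summariser outputs that a
# CRM-and-email-only search would miss. Document IDs are made up.
register_store("email", lambda s: ["msg-12", "msg-40"] if s == "j.doe" else [])
register_store("crm", lambda s: ["crm-7"] if s == "j.doe" else [])
register_store("ai_summaries", lambda s: ["sum-3", "sum-9"] if s == "j.doe" else [])
```

Calling `run_sar("j.doe")` returns hits keyed by store, so the response pack (and any gap in it) is auditable per system.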

A 30-person services firm I supported last year had a SAR for a former contractor land in their inbox. The standard search returned 14 documents. A second pass that included the AI summariser tool returned a further nine, including some with material the firm hadn't realised was being retained. The SAR was answered fully and the firm avoided a complaint. The lesson was that the SAR procedure had to be rewritten to include the AI tooling. They did that work the next month.

## How the three risks compare

| Risk | Trigger | Typical SMB exposure | Cost to fix in advance | Cost if found at audit |
| --- | --- | --- | --- | --- |
| Personal data into third-party AI | Pasted into ChatGPT, summariser, free tool | Privacy notice, ROPA gaps, DPIA missing | £3k to £6k | £15k to £40k plus reputational |
| Article 22 automated decisions | AI shaping a hire, a credit decision, or a shift schedule | Missing or undocumented human review | £2k to £5k | £20k+ plus claimant risk |
| SARs and AI outputs | Subject access request lands | Incomplete return, retained AI summaries | £1k to £4k | £10k+ plus complaint to ICO |

Across the three, the arithmetic is consistent. Fixing in advance is cheap, and fixing at audit is not. The time pressure at audit is also unforgiving. A regulator gives you a defined response window. You don't get to pick the week.

## The four-step audit any 30-person SMB can run this month

This is the practitioner version of the audit. It's not a substitute for legal advice or a formal DPIA, and it doesn't cover every edge case. But it surfaces around 80 percent of the practical exposure for a typical 30-person SMB and it costs eight hours of senior time.

**Step one: list every AI tool in active use.** Walk the team, ask three questions: which AI tools do you currently use, what data do you put into them, and does the data include personal information about a customer, a staff member, or a candidate? Make the list complete. The shadow-AI use is the source of half the exposure, and it never appears on the official IT list.

**Step two: classify each tool by data flow.** For each tool, write down whether the inputs leave your systems, where they go, what the vendor's terms say about training and retention, and whether you have a contract that overrides those terms. This single table is what your DPIA will be built on.
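The step-two table can be kept as a record per tool, so every tool answers the same questions and the DPIA has a consistent base to build on. A minimal sketch follows; the field names, tool names, and the urgency rule are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AiToolRecord:
    tool: str
    handles_personal_data: bool
    inputs_leave_our_systems: bool
    vendor_trains_on_inputs: bool
    contract_overrides_terms: bool

    def needs_urgent_review(self) -> bool:
        """Personal data leaving your systems to a vendor that trains on
        inputs, with no contract overriding those terms, is the riskiest row."""
        return (self.handles_personal_data
                and self.inputs_leave_our_systems
                and self.vendor_trains_on_inputs
                and not self.contract_overrides_terms)

# Example inventory rows; the answers here are hypothetical.
inventory = [
    AiToolRecord("free summariser", True, True, True, False),
    AiToolRecord("M365 Copilot", True, True, False, True),
    AiToolRecord("code assistant", False, True, True, False),
]
flagged = [t.tool for t in inventory if t.needs_urgent_review()]
```

In this hypothetical inventory only the free summariser gets flagged: it is the combination of answers, not any single one, that marks a tool for the switch-off-or-replace conversation.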

**Step three: identify any decisions that affect a person.** Map every workflow where the AI is making or strongly shaping a decision for an individual. Hiring is the obvious one. Performance assessment, shift scheduling, credit decisions, access controls, and individual pricing all belong on the list. For each one, document the human-review step. If the human-review step isn't there or isn't recorded, design one in this fortnight.

**Step four: update the SAR procedure and the privacy notice.** Add the AI tools to your records of processing. Update the privacy notice to disclose the AI processing in plain English. Rewrite the SAR procedure to cover AI summaries, indexes, and outputs. Test the new procedure with a fake SAR before you have to run a real one.

A 30-person SMB running this audit in a single month, with one senior person owning it for eight hours and a second person validating for two, will catch most of what an ICO audit would catch. It's not a substitute for proper compliance work on a complex AI deployment. It is the floor that should already exist by the time anyone calls themselves AI-mature.

## What an SMB should not do

Don't wait for the ICO to write to you. Audits are routine, and one doesn't have to start with a complaint to land on your desk. The ICO publishes a sectoral programme each year and samples randomly within sectors.

Don't assume your existing data protection officer or external DPO has covered the AI tools. Many haven't. Many were retained before the AI tools were in active use. The DPO's scope was probably your CRM, your HR system, and your finance stack. The shadow-AI tools are a separate workstream and they need an explicit instruction.

Don't conflate "vendor has good security" with "we have lawful basis for processing this data through them." The vendor's security posture is necessary. It isn't sufficient. The lawful basis sits with you as the data controller.

## How my clients sequence this work

I work with this end of compliance regularly. The order that holds is consistent across firms. Shadow-AI inventory comes first, because you can't fix what you can't see. DPIA on the riskiest tool comes second. AI policy comes third. SAR procedure update is fourth, and an ongoing quarterly review is fifth.

The AI Ops Vault at richardbatt.co.uk/vault has the DPIA template and the AI policy template I use with clients. The AI Roadmap audit at richardbatt.co.uk/roadmap covers vendor risk and compliance posture as part of the assessment, alongside the operational scoping. If you want a structured approach that surfaces the compliance gaps and the operational opportunities at the same time, the Roadmap is the fastest route.

## Frequently asked questions

**Does using a free AI tool for personal data put me out of compliance immediately?**

It depends on the lawful basis, the vendor's terms, and whether you've informed the data subject. Many free tools include training-on-input language in their terms, which means the data can end up in the vendor's model and creates a hard lawful-basis problem. The simple test is: would you be comfortable telling the data subject that their data is now potentially in a third-party model? If not, you're probably out of compliance.

**What's a DPIA and when do I need one?**

A data protection impact assessment is a documented analysis of the risks a processing activity poses to individuals, together with the mitigations you will put in place. Under UK GDPR you need one for any new processing that's likely to result in a high risk to individuals. Most AI processing on personal data crosses the threshold. The ICO publishes a template that takes a small firm around four to six hours to fill in properly.

**If we use Microsoft Copilot or Google Workspace AI, do we still need to do this work?**

Yes. The platform vendor's contract typically gives you better terms than a free tool, and the data flow stays inside your tenancy. But your records of processing still need to disclose the AI processing, your privacy notice still needs updating, and your Article 22 risk still applies if the AI is making decisions. The platform contract is the floor. The work I've described is the compliance walls and roof.

**How does the ICO actually find out about my AI use?**

Three ways. A complaint from a data subject (a candidate, a staff member, a customer) is the most common. A breach notification your firm makes under its own reporting duty is the second. And a sectoral audit programme is the third. Smaller firms tend to assume the third is the only risk. The first two are more frequent in practice.

**Are there fines for getting this wrong?**

UK GDPR fines can theoretically reach £17.5 million or 4 percent of global annual turnover, whichever is higher. In practice, ICO action against SMBs has more often taken the form of enforcement notices, undertakings, and required remedial work. The financial cost is usually in the remediation and the lost contracts that follow a published enforcement action, not in the headline fine itself.

## What to do this week

Run step one of the audit. Walk the team, ask the three questions, write the list. That single hour will tell you whether you're ahead or behind. Most 30-person SMBs are behind. The good news is that the practical fix is finite, sub-£10,000 for most cases, and it pays for itself the first time a SAR or an audit lands.

After 120 AI projects across 15 industries, the pattern is consistent. The risks aren't new. The ICO has been telling everyone for three years. And the firms that have done the eight-hour audit in advance are the firms that don't get the panicked board meeting in month 14.

---

## More about Richard Batt

Richard Batt is an AI implementation specialist who helps businesses deploy working AI automation in days, not months. 120+ projects across 15+ industries.

### Key pages

- [Home](https://richardbatt.com/)
- [About Richard](https://richardbatt.com/about)
- [Blog](https://richardbatt.com/blog)
- [Contact](https://richardbatt.com/contact)
- [Subscribe](https://richardbatt.com/subscribe)

### Contact

- Email: richard@richardbatt.com
- Location: Middlesbrough, UK (working globally)
- Website: https://richardbatt.com