Richard Batt
How to Write an AI Usage Policy Your Team Will Actually Follow
Tags: AI Governance, Operations
I walked into a discovery meeting with a financial services company, and during a casual conversation with their operations manager, she mentioned that last month someone had pasted a spreadsheet containing client account balances and trading history into ChatGPT. Not to train it. Not deliberately. Just... to help with a data analysis task. There was no policy against it. There was no process for vetting tools. There wasn't even a conversation about what should or shouldn't happen with client data. The company had a 34-page information security policy that nobody read, but nothing that actually answered the one question people needed answered: is this tool safe to use with our data?
Key Takeaways
- Why Your Current Policy Approach Probably Isn't Working, and What to Do About It.
- Building Your Actual AI Policy: The Framework.
- What the Policy Actually Looks Like: A Real Example.
- Implementation: Getting Your Team to Actually Follow It.
- The Governance Mistakes I See Regularly, and How to Avoid Them.
That gap between what you need (actionable guidance) and what you have (compliance documents) is where most organisations fail with AI governance. I've worked through AI policy implementation on 120+ projects, and the pattern is consistent: the policies that work are short, specific, and focused on decision-making. The policies that fail are comprehensive, theoretical, and written for a legal review that never happens.
Why Your Current Policy Approach Probably Isn't Working
Let me be direct about what I see in the wild. Most organisations take one of three approaches, and all three fail for different reasons.
The first approach is prohibition. You ban ChatGPT, Claude, Gemini, and anything that isn't on the approved tools list. This works until an employee actually needs to solve a problem, at which point they either ignore the policy or find workarounds you've never heard of. Prohibition works for genuinely dangerous things. It's useless for tools that employees are going to want to use regardless.
The second approach is laissez-faire. You allow people to use anything, with the assumption that they'll be sensible. This approach creates obvious risk: proprietary data walks out the door, client information ends up in training datasets, and you have no visibility into what's actually happening. I worked with one software development company that discovered three years into their remote-work setup that developers were using public AI coding assistants to debug client-specific code. That's not malicious. That's just what happens when there's no clear guidance.
The third approach is bureaucracy. You create a 60-page AI policy that covers every conceivable scenario, requires approval workflows for using tools, and gets so complicated that the people who are supposed to follow it don't bother reading it. I've seen organisations where the AI policy is literally 7,000 words long and covers hypothetical scenarios that will never happen while missing obvious questions like "can I use ChatGPT to help with my job?"
The approach that actually works is pragmatic governance. You establish clear rules that people can apply without permission-seeking. You identify the specific risks that matter to your business. You make the policy short enough to read and specific enough to act on.
Building Your Actual AI Policy: The Framework
A usable AI policy has five components. It isn't comprehensive (comprehensive policies are the enemy of compliance), but it covers the decisions people actually need to make.
First: Categorise your data by sensitivity. This isn't new; you probably already do this for information security. You have public data, internal data, sensitive data, and restricted data. The AI policy needs to map to these categories. Public data can go into any tool. Internal data might require vendor evaluation. Sensitive client information stays on approved systems. Restricted data (financial records, personal information, trade secrets) doesn't go into AI tools at all. This is the decision tree that actually matters. If someone asks "can I use ChatGPT for this?" the answer is "what category of data?", and the policy tells them what to do based on the answer.
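To show how small that decision tree really is, here's a minimal sketch in Python. The categories, tool names, and notes are illustrative assumptions loosely based on the tiers above, not a prescription; the point is that the whole policy fits in a lookup table.

```python
# A minimal sketch of the data-category decision tree described above.
# Categories, tool names, and notes are illustrative assumptions.

POLICY = {
    "public": {"allowed_tools": "any",
               "notes": "No approval needed."},
    "internal": {"allowed_tools": ["Claude (company API)", "Copilot Enterprise"],
                 "notes": "Redact names and figures; no full datasets."},
    "sensitive": {"allowed_tools": ["Claude (company API)", "Copilot Enterprise"],
                  "notes": "Only tools covered by a Data Processing Agreement."},
    "restricted": {"allowed_tools": [],
                   "notes": "No AI tools. Escalate to the CTO and compliance."},
}

def can_use(tool: str, data_category: str) -> str:
    """Answer 'can I use this tool with this data?' straight from the policy table."""
    rule = POLICY.get(data_category)
    if rule is None:
        return "Unknown data category - ask before proceeding."
    allowed = rule["allowed_tools"]
    if allowed == "any" or tool in allowed:
        return f"Yes. {rule['notes']}"
    return f"No. {rule['notes']}"

print(can_use("ChatGPT", "public"))      # Yes. No approval needed.
print(can_use("ChatGPT", "restricted"))  # No. No AI tools. Escalate to the CTO and compliance.
```

If your policy can't be expressed roughly this simply, that's usually a sign the written version won't be clear to a human reader either.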
One insurance company I worked with implemented this framing and cut their policy from 45 pages to three pages of actual content. Everyone understood it. Compliance improved because people could actually remember and apply the rules.
Second: Define acceptable use by category. Not all AI tools are the same. ChatGPT, Claude, Gemini, Perplexity, and specialised domain tools have different capabilities, different data handling practices, and different risk profiles. Your policy should address the tools your team is actually going to use, not every tool in existence. For each category of data, specify which tools are approved. This isn't about limiting choice; it's about creating clear guidance so people don't have to guess.
The approved list should have a process for adding tools. When someone proposes a new tool, you evaluate it against your criteria (data handling practices, security certifications, terms of service) and make a decision. This keeps your policy evergreen without requiring constant manual updates.
Third: Establish data handling rules. People need to know what to do with data before they put it into an AI tool. This means: do not paste entire datasets; redact personally identifiable information before sharing; do not share client names or company names unless the tool specifically permits it; if you're unsure, ask. These are practical rules that people can follow. They're not theoretical guidelines; they're actionable steps.
One specific rule I recommend: always assume that anything you put into a public AI tool could be visible to others or used for training purposes. This changes behaviour. People become much more careful about what they share when they operate under that assumption.
Fourth: Create a vendor evaluation framework. When your team identifies a new tool, they need a way to evaluate it. Does it have a privacy policy? What does it say about data retention? Does it offer enterprise terms? Is there a non-disclosure agreement available? Can you get security certifications? These questions need answering before you approve a tool. The framework should be simple enough that a non-technical person can apply it, but complete enough that it actually covers the risk areas that matter to your business.
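As a sketch of how lightweight that framework can be, the checklist below encodes the questions above as a pass/fail review. The specific criteria and the "everything must pass" rule are assumptions you'd adjust to the risk areas that matter to your business.

```python
# A minimal vendor-evaluation checklist mirroring the questions above.
# The criteria and the pass/fail rule are illustrative assumptions.

VENDOR_CHECKLIST = [
    "Has a published privacy policy",
    "States its data retention period",
    "Does not train on customer data (or offers an opt-out)",
    "Offers enterprise terms or a Data Processing Agreement",
    "Holds a recognised security certification (e.g. SOC 2 or ISO 27001)",
]

def evaluate_vendor(name: str, answers: dict) -> None:
    """Print an approve / needs-review verdict for a proposed tool."""
    failures = [c for c in VENDOR_CHECKLIST if not answers.get(c, False)]
    verdict = "APPROVED" if not failures else "NEEDS REVIEW"
    print(f"{name}: {verdict}")
    for c in failures:
        print(f"  - missing: {c}")

# Example: a hypothetical vendor that ticks every box except the certification.
evaluate_vendor("ExampleAI", {c: True for c in VENDOR_CHECKLIST[:4]})
```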
I worked with a healthcare services company where this process revealed that one "approved" vendor had no privacy certification at all. The framework forced a conversation: either upgrade the vendor's terms or move to an alternative. That's exactly what governance should do.
Fifth: Establish escalation and approval paths. Some decisions require centralised approval. If someone wants to use an AI tool with a new category of data, or if they're proposing a tool that hasn't been evaluated, there needs to be a path to get that approved. But this should be genuinely rare; most decisions should be clear from the policy itself. The escalation process should take days, not weeks. If your approval workflow requires three sign-offs, you're creating an incentive for people to ignore the policy and just use tools without asking.
What the Policy Actually Looks Like: A Real Example
Let me give you a concrete example of a policy section that works. This is adapted from a professional services firm I worked with recently:
Public Data and General Tasks: You can use ChatGPT, Claude, Gemini, or Perplexity for routine work that doesn't involve company data. Examples: drafting emails, creating presentation outlines, learning about industry trends, brainstorming ideas. No approval needed.
Internal Company Data: You can use approved tools (Claude via our API, or approved alternatives) for work involving internal operational data. Do not paste entire datasets. Do not include real client names, financial figures, or identifiable information. Redact before sharing. When in doubt, ask.
Client-Sensitive Data: Client information stays on approved systems. If you need AI assistance with client work, use only tools that operate under a Data Processing Agreement with our company. Currently these include Claude via our API and Microsoft Copilot Enterprise. Contact the compliance team if you need to use a different tool.
Restricted Data: Financial records, personal information, trading data, and proprietary technical details do not go into any AI tool. Period. If you have a use case that you think requires this, escalate to the CTO and compliance team.
That's it. That's a complete, functional AI policy. It answers the questions people actually ask. It creates clear guidance without requiring permission-seeking for 90% of scenarios. It identifies escalation paths when there's genuine uncertainty. This works because it's focused on decision-making, not on hypotheticals.
Implementation: Getting Your Team to Actually Follow It
A policy is useless if nobody reads it. So: make it short enough to read (maximum one page of actual rules, not counting examples). Distribute it in writing. Cover it in a 30-minute team meeting where you discuss real scenarios ("can I use this tool for X?") and walk through how the policy answers them. Take questions. Update based on feedback. Then, crucially, link to it from your onboarding process and your internal tools.
One software company I worked with embedded the AI policy into their Slack workspace as a pinned message, their onboarding documentation, and their internal wiki. When someone asked about tool usage, they got pointed to the policy. Within two months, policy awareness was above 85%.
Here's another practical step: create a simple approval form for tool requests. Not a lengthy questionnaire, just "What tool are you proposing? What's the use case? What data will it access?" This serves two purposes. First, it forces people to think through the questions that actually matter. Second, it creates a record that helps you iterate on policy. After three months, you've probably seen 10-15 tool requests, and you can identify patterns in what people want and whether your policy is actually addressing their needs.
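If you want those requests in a structured log rather than an inbox, something this small is enough. The field names below are illustrative assumptions; a shared spreadsheet with the same columns does the job just as well.

```python
# A tool-request record holding the three questions from the form above.
# Field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolRequest:
    tool: str                     # What tool are you proposing?
    use_case: str                 # What's the use case?
    data_categories: list         # What data will it access?
    requested_on: date = field(default_factory=date.today)
    status: str = "pending"       # pending / approved / rejected

requests = [
    ToolRequest("Perplexity", "Competitor research for proposals", ["public"]),
    ToolRequest("Notion AI", "Summarising internal meeting notes", ["internal"]),
]
# Review the log quarterly to spot patterns in what people actually ask for.
```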
The Governance Mistakes I See Regularly
After 120+ implementation projects, the mistakes are consistent. The first is treating AI governance as a security problem when it's really a risk-and-utility problem. Yes, security matters. But if your policy slows people down more than it protects the business, people will ignore it. You need to find the balance where you've addressed real risks without creating friction that kills adoption.
The second mistake is making the policy too specific to today's tools. ChatGPT, Claude, and Gemini will be different in six months. Your policy shouldn't require rewriting every time a new version comes out. Instead, frame the policy around capabilities and data categories. That's stable. Tools change; principles don't.
The third mistake is having a policy without a process for updating it. New tools emerge. New risks appear. Legal requirements change. Your policy should have a review cadence (quarterly is reasonable) and a simple process for proposing changes. This isn't heavy governance; it's maintenance.
The fourth mistake is creating a policy without actually explaining why. Why do you care about data sensitivity? Why do you require vendor evaluation? If people understand the reasoning, "we need to know where our proprietary data is going because we're liable for it", they'll follow the rules. If it just feels like bureaucracy, they'll work around it.
Testing Your Policy: The Real-World Test
Once you've implemented your policy, test it. Give a few realistic scenarios to people who weren't involved in writing the policy and ask them to answer the questions using only the written policy. Can they answer correctly? If not, the policy isn't clear enough. Iterate based on what you learn.
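A scenario sheet like the one below makes the exercise repeatable. The scenarios and expected answers are illustrative assumptions; replace them with questions your team actually asks.

```python
# A scenario sheet for the real-world test: realistic questions paired with
# the answer the written policy should produce. Scenarios are illustrative.

SCENARIOS = [
    ("Can I draft a blog outline in ChatGPT?",
     "Yes - public data, no approval needed."),
    ("Can I summarise last quarter's internal ops report with an AI tool?",
     "Only in an approved tool, with names and figures redacted first."),
    ("Can I paste a client's trading history into ChatGPT to analyse it?",
     "No - restricted data never goes into AI tools. Escalate if you think you need to."),
]

def print_quiz(with_answers: bool = False) -> None:
    """Print the scenarios as a quiz sheet, optionally with expected answers."""
    for i, (question, answer) in enumerate(SCENARIOS, start=1):
        print(f"{i}. {question}")
        if with_answers:
            print(f"   Expected: {answer}")

print_quiz()                   # hand this to people who didn't write the policy
print_quiz(with_answers=True)  # scoring sheet for whoever runs the exercise
```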
One firm I worked with ran this exercise with their policy draft and discovered that the language around "internal data" was too vague. People weren't sure whether client metadata counted as internal data or client-sensitive data. So they clarified: metadata that doesn't identify specific clients is internal; anything that does identify clients is client-sensitive. That one clarification made the entire policy more usable.
After 90 days of implementation, you should survey your team: is the policy clear? Have you encountered scenarios it doesn't address? Do you know which tools are approved? The feedback should inform the next iteration. This is how you move from a theoretical policy to one that actually guides behaviour.
The Business Case for Doing This Right
A solid AI policy costs time upfront (maybe 40 hours to develop, pilot, and refine) and next to nothing after that. The alternative is doing nothing, which costs you in multiple ways: the risk of proprietary data leaking, the liability of client data ending up in third-party systems, the compliance headache when you're audited, and the friction of employees working around an overly restrictive policy.
More importantly, a clear policy actually enables AI adoption. When people know what they can and can't do, they're more willing to experiment with tools because they know they're within guardrails. A permissive policy with clear limits creates more innovation than a restrictive policy that drives people to use tools covertly.
I worked with a marketing agency where implementing a clear AI policy increased tool usage by 240% within six months. Not because they'd relaxed restrictions (they'd actually tightened them around client data), but because people finally had clarity about what was acceptable. With that clarity came confidence, and with confidence came adoption.
If you're handling AI governance for your organisation, let's discuss what a practical policy would look like for your specific business. I can help you assess your actual risks, identify which tools matter to your teams, and build a policy that protects your data without killing productivity. Governance done right is invisible; people just know what to do.
Frequently Asked Questions
How long does it take to implement AI automation in a small business?
Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
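As a rough illustration with assumed figures: a process that takes 10 hours a week at a fully loaded cost of £40/hour costs about £20,800 a year, while a £200/month tool costs £2,400 a year, so automating it saves roughly £18,400, or around 770% ROI in year one.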
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
What Should You Do Next?
If you are not sure where AI fits in your business, start with a roadmap. I will assess your operations, identify the highest-ROI automation opportunities, and give you a step-by-step plan you can act on immediately. No jargon. No fluff. Just a clear path forward built from 120+ real implementations.
Book Your AI Roadmap: 60 minutes that will save you months of guessing.
Already know what you need to build? The AI Ops Vault has the templates, prompts, and workflows to get it done this week.