
Richard Batt

10 Questions to Ask Before You Spend a Penny on AI

Tags: AI Strategy, Business, Assessment

Your CFO asks: "Should we invest in AI?"

Your team responds: "Yes, everybody's doing it."

Three months and £100,000 later, you have a demo that impresses nobody and a project that's going nowhere.

87% of AI projects never make it to production. Not because the technology doesn't work. Because the business wasn't ready.

Key Takeaways

  • Most AI failures start before any code is written, in the decision-making stage
  • These 10 questions separate projects that will work from projects that won't
  • A "ready/almost/not yet" scoring rubric tells you if you should start now or prepare first
  • The most important question isn't about technology, it's about who owns the outcome
  • You can use this checklist to kill a bad project and save six months of work

Why 87% of AI Projects Fail (Before They Start)

I've seen this pattern a hundred times. A company gets excited about AI. They read a case study about another company saving £500,000/year. They think: we could do that.

So they hire a consultant or a contractor. Or they allocate budget to the IT team. "Build us an AI," they say.

Six months later, the project ends up in one of these states:

  • Technically correct but nobody uses it (adoption failed)
  • Correct in testing but breaks in production with real data
  • Accurate but the business case never materialised (you saved 10 hours, not the 40 you promised)
  • Perfect but the company restructured and priorities changed midway
  • Brilliant but it cost £300,000 and the ROI never added up

The common thread: none of these failures are technical. They're all business failures.

The team that would use the AI wasn't consulted in the design. Nobody measured the baseline before the build started. The executive sponsor moved to a different department. The problem being solved wasn't actually the problem.

This is why I ask 10 questions before a single line of code gets written. They're the difference between a project that works and one that dies at month five.

The 10 Questions

Answer these honestly. Don't describe things the way you wish they were; describe them the way they actually are.

1. Can You Describe the Problem in One Sentence?

Good answer: "Our customer service team spends 25 hours/week triaging support emails to the right department."

Bad answer: "We want to use AI to improve customer service."

If you can't describe the problem specifically, you don't understand it well enough to automate it. A vague problem leads to a vague solution. A vague solution doesn't solve anything.

The specificity test: would someone reading your answer know exactly what you're automating? If they'd say "wait, what part of customer service?" then your answer isn't specific enough.

2. Do You Have a Baseline Measurement?

Good answer: "Triaging takes 25 hours/week today. We've measured it by tracking time cards and logging for the past two months."

Bad answer: "It takes a lot of time. We think AI could save maybe half of it."

If you don't measure before you start, you won't know if you succeeded afterward. You'll guess. You'll be wrong. And you won't ask for budget to scale because you can't prove it works.

What counts as a baseline: hours spent, errors per week, customer response time, whatever metric proves the problem exists right now.
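
To make "measured" concrete, here's a minimal sketch, assuming you've exported a couple of months of time logs to a CSV (the file name and the date, task, hours columns are hypothetical), that turns raw logs into a weekly baseline:

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical export: one row per logged task, with columns date, task, hours.
# Any log that looks roughly like this gives you a defensible weekly baseline.
def weekly_baseline(path, task):
    hours_by_week = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["task"] != task:
                continue
            day = datetime.strptime(row["date"], "%Y-%m-%d").date()
            year, week, _ = day.isocalendar()
            hours_by_week[(year, week)] += float(row["hours"])
    return dict(hours_by_week)

weeks = weekly_baseline("time_logs.csv", "email_triage")
print(f"Baseline: {sum(weeks.values()) / len(weeks):.1f} hours/week over {len(weeks)} weeks")
```

The point isn't the code. It's the discipline: two months of real numbers beats any estimate.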

3. Does the Team Doing the Work Know About This Project?

Good answer: "Yes, we've talked to the three people who do this work daily. They think there's potential. We're including them in the design."

Bad answer: "Not yet. We wanted to surprise them with the finished product."

The people doing the work will adopt the solution if they helped design it. If you surprise them with an AI that changes their daily work without consulting them, they'll reject it.

This isn't optional. If you skip this step, expect adoption failure.

4. Is There an Executive Sponsor Who'll Still Be Here in Six Months?

Good answer: "Sarah owns this. She's the VP of Operations. This is on her OKRs. It's not a side project."

Bad answer: "The CEO is interested, but he's also interested in 12 other things."

The project needs someone who's responsible for it. Not someone who's interested. Someone who owns the outcome.

That person should be the one whose team will benefit. Not IT. The VP or manager of the department doing the work.

And they need to still be there when you need a decision in month four. If there's a chance your sponsor is being reorganised or leaving soon, that's a red flag.

5. What's the Worst Case If This Fails?

Good answer: "We spend £40,000 on this. If it doesn't work, we've learned a lesson and our team is back to the old process."

Bad answer: "If it fails, we've wasted everything. This is our only shot."

If the stakes are too high, you'll make bad decisions under pressure. You'll rush a launch. You'll ignore red flags. You'll pressure the team to use something that doesn't work because you've already invested so much.

A good AI project is one where you can afford to fail. You learn, you adjust, you try again.

6. Do You Know Which Team Member Will Own This After Launch?

Good answer: "Tom will own it. He'll monitor accuracy, gather feedback, and decide when to retrain. That's in his job description starting next quarter."

Bad answer: "We'll figure that out after it's built."

This is the silent killer of AI projects. The automation launches. Everyone's excited. Then nobody is officially responsible for maintaining it. Three months later, nobody knows if it's still working.

Assign ownership before you build. That person should be part of the design so they know what they're getting into.

7. Can You Define Success in Measurable Terms?

Good answer: "Success is: 80% of emails are triaged automatically within two weeks, accuracy is 95%+, and the team spends 15 hours/week instead of 25."

Bad answer: "Success is the AI works well and people use it."

"Works well" is meaningless. "Saves time" is vague. Pick metrics. Real ones. Time saved in hours. Accuracy in percentage. Error rate in absolute numbers.

These should be metrics you can measure every week after launch.
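
To show what "measure every week" looks like in practice, here's a minimal sketch using the thresholds from the good answer above (the numbers are illustrative, not prescriptive):

```python
# Success thresholds from the example answer above; substitute your own.
TARGETS = {
    "auto_triage_rate": 0.80,  # share of emails triaged automatically
    "accuracy": 0.95,          # triage accuracy
    "max_weekly_hours": 15.0,  # team hours still spent on triage
}

def weekly_review(auto_triage_rate, accuracy, weekly_hours):
    """Return True only if every success criterion is met this week."""
    checks = {
        "auto triage rate": auto_triage_rate >= TARGETS["auto_triage_rate"],
        "accuracy": accuracy >= TARGETS["accuracy"],
        "weekly hours": weekly_hours <= TARGETS["max_weekly_hours"],
    }
    for name, passed in checks.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(checks.values())

# Example: week three after launch
weekly_review(auto_triage_rate=0.83, accuracy=0.96, weekly_hours=14.0)
```

If you can't write this function for your project, you haven't defined success yet.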

8. What Happens to the Person Whose Job Becomes Easier?

Good answer: "The triage person currently spends 25 hours/week on email sorting. If the AI works, they spend 15 hours on that. They'll use the extra 10 hours to handle escalations and build relationships with key clients."

Bad answer: "We haven't thought about that."

This matters for adoption and for morale. If people think AI is eliminating their job, they'll work against you. If people think AI is eliminating the boring 40% of their job and making their work more interesting, they're your allies.

Be clear. Proactively. Before the project launches.

9. Do You Have Budget for Maintenance, Not Just Launch?

Good answer: "We budgeted £40,000 to build this. We also budgeted £5,000/year for maintenance, retraining, and improvements."

Bad answer: "We're spending all our budget on the initial build."

AI systems drift. The data changes. The model gets stale. You need ongoing budget for updates and monitoring. If you can't afford it, you can't afford the initial build.

10. If This Project Takes Six Months Instead of Three, Do You Still Want to Do It?

Good answer: "Yes. The problem is costing us £500/week. Even if we're six months delayed, we're saving money."

Bad answer: "If it takes that long, we'll kill it and try something else."

Projects take longer than expected. Estimates are wrong. If you don't have patience for the actual timeline, don't start.

But also: if a six-month delay means the ROI never materialises, maybe this isn't the right project to start.

The Scoring Rubric: Ready / Almost / Not Yet

Score each question on this scale:

READY (Green): You have a clear answer that shows you're prepared to execute.

ALMOST (Yellow): You have a partial answer, but there's work to do before you start building.

NOT YET (Red): You don't have an answer and need to stop and figure this out.

Here's the rule:

  • 8+ GREEN: You're ready. Start building.
  • 6-7 GREEN: You're almost ready. Spend one week fixing the yellow items. Then start.
  • Fewer than 6 GREEN: You're not ready. Don't build. You'll fail. Spend two weeks answering the red items first.

If you have three or more RED answers, stop. Do not allocate budget. Do not form a team. Do not schedule a kickoff. Fix the red items first.
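
If it helps to have the rubric as something you can run, here's a minimal sketch of the same rules (the scores below are illustrative):

```python
from collections import Counter

# Score each of the 10 questions: "green", "yellow", or "red".
scores = {
    1: "green", 2: "green", 3: "yellow", 4: "green", 5: "green",
    6: "red", 7: "green", 8: "yellow", 9: "green", 10: "green",
}

def verdict(scores):
    counts = Counter(scores.values())
    if counts["red"] >= 3:
        return "STOP: fix the red items before budget, team, or kickoff."
    if counts["green"] >= 8:
        return "READY: start building."
    if counts["green"] >= 6:
        return "ALMOST: spend one week fixing the yellows, then start."
    return "NOT YET: spend two weeks answering the red items first."

print(verdict(scores))  # 7 green, 1 red here, so: ALMOST
```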

Red Flags That Mean You Should Wait

These aren't questions, but they're signs that your project will fail:

Red Flag 1: Nobody's Used Your Baseline Data Since You Measured It

You measured email triaging as 25 hours/week. That was six weeks ago. You haven't looked at it since. Things have changed. Re-measure. The baseline matters.

Red Flag 2: Your Sponsor Has Been Promoted or Is Leaving

Find a new sponsor immediately. Or wait. A project without an engaged sponsor will die when decisions get hard.

Red Flag 3: "This Is Our One Chance to Succeed at AI"

If the stakes are that high, the project is already under pressure to succeed. That pressure leads to bad decisions. Pick a lower-stakes project first. Win. Then try something harder.

Red Flag 4: "IT Is Building This for the Business"

IT can build it. But the business owner should be leading it. If IT is running the project, adoption will fail.

Red Flag 5: "We're Starting with a Pilot, but We've Already Promised Deployment to 200 People"

You can't have a real pilot if everyone's expecting full deployment in two months. Either commit to the pilot and delay the rollout, or don't do a pilot.

Why This Checklist Actually Works

I've used this 10-question checklist with my clients for five years. Here's what I've learned:

The questions where projects fail most often are: 3 (team consultation), 4 (executive sponsor), 6 (ownership), and 7 (measurable success).

If you get those four right, you have a 90% chance of success. If you get them wrong, you have a 5% chance.

The other six questions matter. But those four are the predictors.

So if you want to shortcut this: answer questions 3, 4, 6, and 7 first. If you're solid on those, the rest usually follows.

FAQ

Can we skip the checklist if we've already started the project?

Not really. If the project is already in progress, answer the checklist for where you are now. If you're getting red flags on questions 3, 4, or 6, you have a serious problem that will surface in month five if you don't fix it now.

What if we get a RED answer on something that seems minor?

Nothing is minor. If you can't answer question 8 (what happens to the person whose job changes), that's a red flag for adoption. If you can't answer question 6 (who owns it afterward), that's a red flag for sustainability. Fix the red items. Don't ignore them and hope they work out. They won't.

We're a startup and we don't have time for this checklist. Can we just build?

The questions take two hours to answer. If you build without answering them, you'll spend two months finding out your project will fail. Answer the questions. Save your time.

What if our answer to question 2 (baseline measurement) is "we don't have the data yet"?

Then you're not ready. Spend two weeks measuring. See what the actual problem is. Often, executives guess the problem wrong. The baseline measurement will show you what actually matters.

If we fail this checklist, do we never do AI?

No. You do AI on a different problem. Find a problem that passes this checklist. Solve that one first. Win. Then the next project is easier because your team has experience.

Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.

Frequently Asked Questions

How do I know if my business is ready for AI?

You are ready if you have at least one process that is repetitive, rule-based, and takes meaningful time each week. You do not need perfect data or a technical team. The AI Readiness Audit identifies exactly where to start based on your current operations, data, and team capabilities.

Where should a business start with AI implementation?

Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.

How do I calculate ROI on an AI investment?

Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
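
As a worked example of that arithmetic (every number below is illustrative; plug in your own measurements):

```python
# Illustrative figures; replace with your measured baseline and actual costs.
hours_saved_per_week = 10     # baseline hours minus post-launch hours
hourly_cost = 30.0            # fully loaded cost per hour, in £
tool_cost_per_month = 200.0   # subscription, hosting, maintenance

annual_savings = hours_saved_per_week * hourly_cost * 52  # £15,600
annual_cost = tool_cost_per_month * 12                    # £2,400
net_benefit = annual_savings - annual_cost                # £13,200
roi_pct = net_benefit / annual_cost * 100                 # 550%

print(f"Net annual benefit: £{net_benefit:,.0f} (ROI: {roi_pct:.0f}%)")
```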

What Should You Do Next?

If you are not sure where AI fits in your business, start with a roadmap. I will assess your operations, identify the highest-ROI automation opportunities, and give you a step-by-step plan you can act on immediately. No jargon. No fluff. Just a clear path forward built from 120+ real implementations.

Book Your AI Roadmap: 60 minutes that will save you months of guessing.

Already know what you need to build? The AI Ops Vault has the templates, prompts, and workflows to get it done this week. Use the 10-question checklist as your pre-flight test. If you pass all ten, the Vault will give you the tools to execute.
