Richard Batt

The AI Maturity Ladder: Where Does Your Business Actually Sit?

Tags: AI Strategy, Assessment

I've worked with 150+ companies over the past decade, and the pattern is consistent: most don't know where they stand with AI. They claim they're "scaling" while still experimenting. Or they say they're "piloting" with zero actual movement. This confusion costs months of delay and millions in wasted effort.

Key Takeaways

  • The AI Maturity Ladder has five levels; place yourself on one before building anything.
  • Level 1: Curious (No Real AI Yet).
  • Level 2: Experimenting.
  • Level 3: Piloting.
  • Level 4: Scaling.
  • Level 5: AI-Native.

That's why I built the AI Maturity Ladder. It's not a rigid framework with a 47-point checklist. It's a simple way to look at your organization and say, "Okay, we're here, and here's what moving up actually looks like."

The Five Levels of AI Maturity

Think of these as distinct phases. You don't leap from Level 1 to Level 5. You climb. And there's specific work that happens at each level.

Level 1: Curious (No Real AI Yet)

At Level 1, you're asking questions like "Should we use AI?" and "What could AI do for us?" You might have a ChatGPT account. Someone played around with Claude. You watched a webinar. But there's no organizational commitment, no budget allocated, and no clear business problem you're trying to solve.

What it looks like: Board mentions AI as a risk. Marketing talks about it in pitch decks. Engineering is skeptical. No one's accountable. There's energy, sure, but no direction.

The mistake I see most: trying to hire an "AI expert" and expecting them to define your strategy. That's backwards. Strategy comes first. Hiring comes after.

How to move up: Pick one small problem: a real business problem that has a clear success metric. Not "improve efficiency." Something specific like "reduce time to close on sales opportunities by 25%" or "cut customer support ticket volume by 15%." Start there.

Level 2: Experimenting

You've identified problems worth solving. You're running 3-5 small experiments with AI. One team is using AI to summarize meeting notes. Another is testing an AI chatbot for first-line support. Finance is playing with document classification. You've got a budget. You're learning.

What it looks like: Multiple teams running pilots in parallel. Data quality discussions are happening. People are excited and skeptical in equal measure. Nothing is live in production yet, but you're getting real data on what works.

The mistake I see most: running too many experiments at once. I watched a healthcare company spin up 12 pilots simultaneously, and guess how many shipped? Zero. They burned out their team and diluted focus. My rule: start with 3-5 focused experiments, see them through, then expand.

How to move up: Finish at least one experiment. Don't move to Level 3 until you've shipped something: even if it's small. You need proof that AI can solve your problem in your context with your data.

Level 3: Piloting

You've proven the concept. One or two use cases are generating measurable ROI. You're running controlled pilots with real users, real data, and real stakes. You have a 90-day roadmap. There's an owner responsible for each initiative. You're measuring things that matter: cost reduction, time saved, quality improvement, revenue impact.

What it looks like: A 50-person department is using an AI tool to prioritize which customers to call on. Your support team is using AI to draft responses, cutting handle time from 12 minutes to 8 minutes. Finance cut invoice processing time from 4 days to 8 hours. These are live, in production, being used every day by real people.

The mistake I see most: overstating results. A company will do a pilot that saves 5% and market it internally as a 20% opportunity. This kills credibility when the actual implementation delivers 5%. Be honest about what you've measured.

How to move up: Expand your best pilot to a broader user base. Take what worked in a 50-person team and roll it out to 200 people. Instrument it. Measure everything. Make sure it still works at scale.

Level 4: Scaling

You're running 5-10 AI initiatives across your organization. They're all live. They're all generating measurable value. You've built processes around model updates, data management, and user feedback. You're not chasing shiny objects anymore: you're optimizing what you've built.

What it looks like: AI is woven into the day-to-day work of hundreds or thousands of employees. It's not special anymore. It's just how things work. You're asking questions like "How do we improve our model's accuracy by 2%?" instead of "Should we try AI?"

The mistake I see most: losing focus on business outcomes. Companies at this level sometimes treat technical excellence as the goal rather than business impact. A model might be beautiful, but if it doesn't move the needle on what matters, it's a cost center, not a value creator.

How to move up: This is the hard part. You need to build AI-native workflows where AI is core to the business model, not bolted on afterward. It's not just a tool your team uses: it's how you deliver value to customers or how you run operations fundamentally differently.

Level 5: AI-Native

At this level, AI isn't an initiative. It's how you work. Your business model has AI baked into it. You make decisions based on AI predictions. You've reengineered your entire workflow around what AI makes possible. You're probably doing this with proprietary models trained on your own data. Your competitive advantage includes AI.

What it looks like: A fintech company routes every deal through an AI underwriting model. The model has learned your data, your risk appetite, your customer segments. You close loans 40% faster and with 8% better ROI than competitors. Or a logistics company uses AI not just for route optimization but for demand forecasting, dynamic pricing, and customer retention: and these systems talk to each other. AI is the business.

The mistake I see most: trying to skip straight to this level, and failing catastrophically. I've seen a manufacturer try to go straight from Level 1 to Level 5 because the CEO was excited about AI. They built a sophisticated forecasting model, but no one used it because it required changing how sales and operations worked together. It sat unused for a year.

Quick Self-Assessment Questions

Be honest: Where do you actually sit?

  • Do you have a business problem tied to an AI initiative, or are you exploring AI as a concept?
  • Have you shipped anything to production, or are you still planning?
  • Can you quantify the impact of your AI work in dollars or key metrics?
  • Do 5+ teams across the company use AI regularly, or is it isolated?
  • Is AI part of how your business model works, or is it a capability you've added on?

This isn't an aspirational assessment. It's where you are today, right now.
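As a rough sketch, you can turn these five questions into a quick score. This is my own illustrative mapping, not a formal instrument; the scoring rule (each consecutive "yes" climbs one rung, the first "no" stops the climb) is an assumption I'm making for the example:

```python
QUESTIONS = [
    "Is there a specific business problem tied to an AI initiative?",
    "Have you shipped anything to production?",
    "Can you quantify the impact in dollars or key metrics?",
    "Do 5+ teams across the company use AI regularly?",
    "Is AI part of how your business model works?",
]

def maturity_level(answers):
    """Map ordered yes/no answers to a rung on the ladder.

    Each consecutive 'yes' from the top climbs one level; the first
    'no' stops the climb, because you can't honestly claim Scaling
    if nothing has shipped. Capped at Level 5 (AI-Native).
    """
    level = 1  # Level 1: Curious, the default starting point
    for answer in answers:
        if not answer:
            break
        level += 1
    return min(level, 5)
```

So a company with a real problem and one shipped project, but no quantified impact yet, scores `maturity_level([True, True, False, False, False])`, which lands at Level 3: Piloting.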

Why Companies Get Stuck Between Levels

I see predictable patterns where companies stall out. Understanding these helps you avoid them.

Stuck Between Level 1 and Level 2: The CEO is excited. You hire someone smart. But there's no problem to solve, just enthusiasm. Six months in, you realize you've built beautiful models for problems no one cares about. The hire leaves. You're back to Level 1 with less credibility.

Fix: Start with the business problem. Not with the technology. Not with hiring. Problem first.

Stuck Between Level 2 and Level 3: You've got multiple experiments running. Some are promising. But shipping one to production requires redesigning work, training people, and getting buy-in from the team that has to use it. That's harder than building the model. Most companies don't push through. The experiments live forever in pilot status.

Fix: Commit to shipping. Accept that one pilot won't be perfect. Get it live. You learn more from one shipped project than five perfect pilots.

Stuck Between Level 3 and Level 4: You've shipped one or two successful AI initiatives. But expanding requires more data science, more infrastructure, more coordination across teams. Your one successful project was a special case. Scaling it across the organization reveals it wasn't as generalizable as you thought.

Fix: Plan for replication early. Ask: "What did we build that can be duplicated? What was specific to this one problem?" Use that to guide what you build next.

The Time Horizon for Each Level

How long does it actually take to move from one level to the next? Based on what I've seen:

Level 1 to Level 2: 3-6 months. Identify problems, hire or allocate people, run experiments.

Level 2 to Level 3: 6-9 months. Finish at least one experiment, redesign work around it, ship to production, measure impact.

Level 3 to Level 4: 12-18 months. Expand your successful use case, build a second and third initiative, create infrastructure to support multiple projects.

Level 4 to Level 5: 18+ months and usually a business model shift. This is the hard part. It's not just executing better. It's rethinking how your business works fundamentally.

Total time from zero AI to AI-native: usually 3-4 years for a company that's disciplined and focused. Longer if you're going slow. Much longer if you keep changing priorities.

Most companies I talk to want to jump to Level 4 in 12 months. It doesn't happen. Not because AI isn't capable. Because organizational change is slower than technology change.

The Climb Matters

I could tell you that you should be at Level 4 by end of year. Some consultant probably is. But I've learned that the climb is what matters. The discipline of moving from Level 1 to Level 2 teaches you how to move from Level 2 to Level 3. You can't skip steps without breaking something.

I've also learned that not every company needs to reach Level 5. Some businesses benefit enormously from Level 2 or Level 3. A regulatory firm might never be AI-native, but they could be far more efficient with AI-powered research and contract review. That's enough. A logistics company might stop at Level 4: excellent at using AI to improve operations but not building AI into the customer product.

Your job is to figure out where you are, what the next level looks like for your business, and what it takes to get there. The maturity ladder is just a way to think about it clearly. It's not a race. It's a map.

Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.

Frequently Asked Questions

How long does it take to build AI automation in a small business?

Most single-process automations take 1-5 days to build and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.

Do I need technical skills to automate business processes?

Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.

Where should a business start with AI implementation?

Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.

How do I calculate ROI on an AI investment?

Measure the hours spent on the process before automation, multiply by the fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
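That arithmetic fits in a few lines of Python. The figures in the example are illustrative, not drawn from a specific project:

```python
def first_year_roi_pct(hours_saved_per_week, hourly_cost, tool_cost_per_month):
    """First-year ROI: annual labour savings versus annual tool cost.

    hours_saved_per_week: hours of manual work the automation removes
    hourly_cost: fully loaded hourly cost of the people doing that work
    tool_cost_per_month: what the automation tooling costs per month
    """
    annual_savings = hours_saved_per_week * 52 * hourly_cost
    annual_tool_cost = tool_cost_per_month * 12
    return (annual_savings - annual_tool_cost) / annual_tool_cost * 100

# Illustrative: 10 hours/week saved at £40/hour, with £200/month tooling
print(round(first_year_roi_pct(10, 40, 200)))  # 767 (% ROI in year one)
```

Even the conservative end of the range (5 hours/week saved against a £500/month tool) comes out well ahead, which is why the 300-1000% band holds across most small-business projects.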

Which AI tools are best for business use in 2026?

It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.

What Should You Do Next?

If you are not sure where AI fits in your business, start with a roadmap. I will assess your operations, identify the highest-ROI automation opportunities, and give you a step-by-step plan you can act on immediately. No jargon. No fluff. Just a clear path forward built from 120+ real implementations.

Book Your AI Roadmap: 60 minutes that will save you months of guessing.

Already know what you need to build? The AI Ops Vault has the templates, prompts, and workflows to get it done this week.
