Richard Batt
What 120+ Projects Taught Me About Shipping Technology That Sticks
Tags: Consulting, Lessons
I've delivered 120+ automation and AI projects. The best ones still run today. The worst ones are abandoned, broken, or being maintained by people who don't understand them. The difference isn't the technology. It's whether the system was built to survive: to be maintained, understood, and improved by real teams over years.
Key Takeaways
- Lesson 1: The Best Technology Is the One Your Team Can Maintain.
- Lesson 2: Documentation Is a Delivery Artifact, Not an Afterthought.
- Lesson 3: Quick Wins Build Political Capital for Bigger Changes.
- Lesson 4: Every Automation Needs an Owner.
- Lesson 5: Scope Creep Kills More Projects Than Bad Code.
- Lesson 6: The Handover Plan Matters as Much as the Build.
- Lesson 7: Measuring Before and After Is Non-Negotiable.
I've delivered over 120 projects. Some succeeded beyond expectations. Others worked technically but failed to stick around. A few crashed and burned. In this post, I'm sharing seven lessons that every organization considering automation, AI integration, or custom development should understand. These aren't theoretical: they're patterns I've seen repeatedly.
Lesson 1: The Best Technology Is the One Your Team Can Maintain
This is the hardest lesson to learn because it contradicts what consultants and vendors want to tell you. They want to pitch the newest framework, the most current AI model, the architecture that looks perfect in a Gartner report. But here's what I've seen: the most successful deployments use technologies your team is comfortable supporting.
I once built an impressive data pipeline for a financial services client using a hot new open-source framework. It was elegant. It ran fast. Three months after handover, they asked me back to fix a performance issue. I discovered the team had never touched the codebase because nobody understood it. They didn't have the context or expertise. We ended up rebuilding it using boring, standard SQL and basic Python: technologies every senior analyst in the company could maintain. The second version ran slower but stayed alive.
When you're evaluating a solution, always ask: "Who on our team will maintain this in three years, and do they have the skills today?" If the answer requires hiring or extensive training, that's a risk. You can mitigate it, but you can't ignore it.
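The rebuilt pipeline above came down to plain SQL run from basic Python. As a rough illustration of that "boring" approach, here is a minimal sketch using only the standard library's `sqlite3` module; the table and column names (`transactions`, `trade_date`, `amount`) are invented for this example, not taken from the client's system.

```python
import sqlite3

# Illustrative sketch: table and column names are made up. The point is that
# plain SQL plus the standard library is something any senior analyst can
# read, debug, and extend without learning a new framework.

def build_daily_summary(db_path: str) -> list[tuple]:
    """Aggregate a transactions table into a daily summary table."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("""
            CREATE TABLE IF NOT EXISTS daily_summary AS
            SELECT trade_date, COUNT(*) AS trades, SUM(amount) AS total
            FROM transactions
            GROUP BY trade_date
        """)
        conn.commit()
        return conn.execute(
            "SELECT * FROM daily_summary ORDER BY trade_date"
        ).fetchall()
    finally:
        conn.close()
```

Nothing here is clever, and that is the point: slower, readable, and maintainable by the team that inherits it.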
Lesson 2: Documentation Is a Delivery Artifact, Not an Afterthought
I've worked with clients who treat documentation like a nice-to-have added at the end of a project. Invariably, the documentation is either missing, outdated, or so technical that only the original developer can use it. Then the developer leaves, and the system becomes a black box.
The teams that win treat documentation as something you build alongside the system. Not after. Not as phase five. Alongside. This means: setup guides that a new engineer can follow blindfolded, decision logs explaining why you built it this way instead of that way, runbooks for common operations, and clear points of contact for each system component.
I had a healthcare client implement a workflow automation system. The implementation team spent two weeks after go-live documenting everything: how to add a new workflow, how to troubleshoot common errors, what to do if the system gets stuck. Eighteen months later, a junior admin on their team handled a complex workflow addition using only that documentation. No consultant needed. That system is still running and evolving.
Treat documentation as part of the deliverable. Budget for it. Assign someone ownership of it. Your future self (or future team) will thank you.
Lesson 3: Quick Wins Build Political Capital for Bigger Changes
Large transformations fail in organizations when the political foundation isn't there. People don't believe change will work. They're cynical about the consultant's promises. They've been burned before. You can't overcome this with a presentation deck.
But you can build trust through small, visible wins. I worked with a manufacturing company considering a major production floor automation initiative. Instead of starting there, we spent four weeks automating a painful manual report that operations created every Friday. That report ate two hours of their week. We got it down to two minutes with a simple Python script and a scheduled email.
Suddenly, people believed we could deliver. They saw tangible results. When we proposed the bigger floor automation project, the CFO approved it immediately because they'd seen proof we could execute. That project went on to save the company $1.2M annually. It wouldn't have happened without the small win first.
Always look for the 20% effort that delivers 80% credibility with your stakeholders.
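The Friday-report automation described above can be sketched in a few lines. This is a hedged illustration, not the client's actual script: the column names (`machine`, `units`) are invented, and in production a cron job or Windows Task Scheduler would run it weekly and email the output via the standard library's `smtplib`.

```python
import csv
import io

# Hedged sketch of a "Friday report" automation. Column names are
# illustrative; the real report read from the client's operations export.

def weekly_summary(csv_text: str) -> str:
    """Aggregate units produced per machine from a raw CSV export."""
    totals: dict[str, int] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["machine"]] = totals.get(row["machine"], 0) + int(row["units"])
    lines = [f"{machine}: {units} units" for machine, units in sorted(totals.items())]
    return "Weekly production summary\n" + "\n".join(lines)

# In production: schedule this weekly (cron / Task Scheduler) and send the
# returned string by email, e.g. with smtplib from the standard library.
```

Two minutes of compute replacing two hours of copy-paste is the kind of small, undeniable win that buys credibility.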
Lesson 4: Every Automation Needs an Owner
Automated systems without an owner don't stay automated for long. They break silently. They drift out of sync with business requirements. They become technical debt nobody wants to touch.
I consulted for a company that built a customer data sync process between their CRM and data warehouse. It worked beautifully for six months. Then things changed slowly: APIs were deprecated, data formats shifted, the business changed how they categorized customers. The system kept "running," but it was syncing corrupted data. By the time they discovered the problem, they'd made business decisions on bad data.
There was no owner. Everyone assumed someone else was watching it. Before you hand off any automation, identify who owns it. That person should understand what it does, have access to logs, know how to fix common failures, and have time allocated to monitor it. This person's job description should include this system.
Without an owner, automation eventually becomes a liability.
Lesson 5: Scope Creep Kills More Projects Than Bad Code
I rarely see projects fail because the code is bad. I see them fail because the scope became unmanageable. Three months in, stakeholders realize the system could also handle this other use case, and while we're at it, we should add that feature, and it would be nice if it could integrate with this tool...
One software automation project I managed started as "build an invoice processing system." By month two, it had expanded to include expense categorization, vendor management, multi-currency support, and API integrations with five accounting platforms. The original team of three wasn't enough. The timeline kept extending. The code became a mess because every change felt urgent.
We stopped, reset the scope to the original invoice processing, shipped that in 8 weeks, and then planned the expansions as separate projects. Each one succeeded because we had clear boundaries and realistic timelines.
Scope creep happens because it sounds reasonable in meetings. But reasonable scope growth at 5% per week becomes unmanageable. Protect your project by being very strict about the initial scope and making everything else a future conversation.
Lesson 6: The Handover Plan Matters as Much as the Build
I've built systems that technically worked perfectly but couldn't transition to the client team because we didn't plan the handover. The knowledge stayed locked in the consultant's head. The team felt abandoned after launch. Support issues went unfixed because nobody understood the system.
A healthcare provider I worked with insisted on treating handover as a formal part of the project, not an afterthought. We documented decision points, ran practice scenarios where the internal team handled failures, created escalation paths, and established a 90-day overlap where they could call me with questions. The team went from "we can't do this" to "we've got this" because we invested in the transition.
When you're planning a project, the handover shouldn't be phase five: it should be designed from the start. Plan for: knowledge transfer sessions, documentation creation, practice scenarios, and a clear escalation process for the first 90 days.
Lesson 7: Measuring Before and After Is Non-Negotiable
Too many automation projects live or die by anecdote. "It feels faster." "People seem happier." These are not measurements. You need hard data before and after to know if the investment actually worked.
I worked with a customer service team implementing an AI chatbot to handle tier-1 support. They expected it to reduce ticket volume by 50%. We measured carefully: before the chatbot, they handled 200 tickets daily with an average resolution time of 6 hours. Six months post-launch, they handled 220 tickets daily, but average resolution time had fallen to 4.5 hours: with the chatbot handling simple questions, human agents could focus on complex ones. Total volume was actually up, but quality and speed were better. Without measurement, they would have called the chatbot a failure.
Before you build, identify the metrics that matter to your business. Measure them before you launch. Measure them regularly after. This gives you data to decide: Is this working? Should we expand it? Should we try a different approach?
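The before/after comparison from the chatbot example is simple arithmetic, but writing it down forces you to record the same metrics at both points. A minimal sketch, using the figures quoted above:

```python
# Record the same metrics before launch and at intervals afterwards,
# then compare. Numbers mirror the chatbot example in the text.

def compare_metrics(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Percentage change per metric (positive = increase, negative = decrease)."""
    return {k: round((after[k] - before[k]) / before[k] * 100, 1) for k in before}

before = {"tickets_per_day": 200, "avg_resolution_hours": 6.0}
after = {"tickets_per_day": 220, "avg_resolution_hours": 4.5}
# tickets up 10%, resolution time down 25%
```

The decision the data supports ("volume up, resolution faster: keep and expand") is the opposite of what the raw ticket count alone would have suggested.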
What All These Lessons Have in Common
If I had to summarize these seven lessons, they all point to one insight: shipping technology that sticks means focusing on how software fits into your human organization, not just on whether it is technically correct. The technology is important, but it's the least interesting part of the problem. The interesting part is: Will your team use it? Can they maintain it? Does it actually improve your business?
Over 10 years and 120+ projects, the consultants and leaders who ship things that stick are the ones who stay obsessed with these human questions alongside the technical ones. They know that a mediocre solution your team maintains for five years beats a perfect solution that dies after six months.
If you're considering a major automation or AI initiative, these lessons are worth thinking through carefully. They'll save you time, money, and frustration.
Frequently Asked Questions
How long does it take to implement AI automation in a small business?
Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
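That ROI arithmetic can be expressed as a small function. The figures in the example are hypothetical inputs for illustration; substitute your own hours, rates, and tool costs.

```python
# First-year ROI as described above: (annual savings - annual tool cost),
# expressed as a percentage of the annual tool cost. Example inputs are
# hypothetical.

def first_year_roi(hours_saved_per_week: float, hourly_cost: float,
                   monthly_tool_cost: float) -> float:
    """Return first-year ROI as a percentage of annual tool spend."""
    annual_savings = hours_saved_per_week * hourly_cost * 52
    annual_cost = monthly_tool_cost * 12
    return round((annual_savings - annual_cost) / annual_cost * 100, 1)

# Example: 10 hours/week saved at £30/hour, tool at £200/month
# savings £15,600/year vs cost £2,400/year -> ROI of 550%
```

Even a rough calculation like this, done before you buy, tells you whether a tool needs to save hours per week or merely minutes to pay for itself.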
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
Put This Into Practice
I use versions of these approaches with my clients every week. The full templates, prompts, and implementation guides, covering the edge cases and variations you will hit in practice, are available inside the AI Ops Vault. It is your AI department for $97/month.
Want a personalised implementation plan first? Book your AI Roadmap session and I will map the fastest path from where you are now to working AI automation.