Richard Batt
Mentoring Junior Developers in the Age of AI: What Changes and What Doesn't
Tags: Development, Leadership
The New Problem We Didn't Expect
Six months ago, I was onboarding a junior developer who'd spent the first week writing surprisingly solid features using Claude. I watched them commit clean, working code that they couldn't explain. Not because they were being careless: they understood what the code was supposed to do, but they didn't understand *why* it did it that way. I asked them to walk me through a function they'd generated, and they said: "The AI wrote it, and it works." That's when I realized we needed a fundamentally different mentoring approach.
Key Takeaways
- Syntax knowledge is no longer the gating factor; systems thinking and architecture are.
- Code review is now a comprehension check as much as a bug hunt: ask questions instead of pointing out mistakes.
- Debugging, ownership, and communication skills still can't be shortcut by AI.
- Make juniors design before they generate: the architecture work has to be theirs.
- Catch "AI as a crutch" early and be direct about it.
I've mentored dozens of developers over 10+ years. The toolkit I built in 2015 doesn't work in 2026. Here's what changed, what stayed the same, and how to actually develop junior engineers when AI can write production code faster than they can type.
What Actually Changed
Syntax knowledge is no longer the gating factor. Five years ago, I'd spend weeks teaching juniors the patterns of their language: loop syntax, object destructuring, promise chains. That still matters, but it matters less. A junior who doesn't know how to write an async/await chain can ask Claude and move forward. That's not a problem anymore. What *is* a problem is not understanding what problems async/await solves.
This shifts the mentoring burden upward, to the abstraction layer. Instead of teaching syntax, you're teaching systems thinking, architectural patterns, and the performance implications of design choices.
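To make "understanding what problems async/await solves" concrete, here's the kind of side-by-side I walk through with juniors. The `fetchUser` and `fetchOrders` functions are hypothetical stand-ins for real network calls, so the sketch is self-contained:

```javascript
// Hypothetical stand-ins for real API calls, so the example runs anywhere.
const fetchUser = (id) => Promise.resolve({ id, name: "Ada" });
const fetchOrders = (userId) => Promise.resolve([{ userId, total: 3 }]);

// Promise-chain version: each dependent step nests inside a .then() callback.
function userOrdersChained(userId) {
  return fetchUser(userId).then((user) => fetchOrders(user.id));
}

// async/await version: same behaviour, but sequential dependencies read
// top to bottom, and one try/catch replaces scattered .catch() handlers.
async function userOrders(userId) {
  try {
    const user = await fetchUser(userId); // pauses this function, not the thread
    return await fetchOrders(user.id);
  } catch (err) {
    console.error("lookup failed:", err);
    throw err;
  }
}
```

The question worth asking in review isn't "which syntax did you use" but "what happens while we're awaiting, and what happens if `fetchUser` rejects?" A junior who can answer that understands the tool; one who can't is pattern-matching.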
Code review became your primary teaching tool. It used to be that I'd review code to catch bugs and enforce standards. Now it's also where I confirm the junior actually understands what they shipped. I changed my approach: I don't point out mistakes, I ask questions. "Why did you choose this data structure?" "What happens to latency if this query runs 1000 times per request?" "Walk me through the error handling here." If they can't answer, I know the AI wrote it and the junior didn't internalize it.
Architecture and design thinking matter more. When juniors can generate working code, the bottleneck moves to design. Two developers both ask AI to implement a payment system, but one designs it to be testable and the other doesn't. One thinks about failure modes, the other doesn't. I now spend 60% of mentoring time on "how do you think about this problem" and 40% on implementation details.
What Didn't Change
Debugging skills are still non-negotiable. You can't ask Claude to fix a bug you don't understand. I had a junior last year who could generate features with perfect syntax but couldn't debug a database connection issue because they'd never developed that mental model. We spent a full week in the logs and stack traces. It was boring and unglamorous and absolutely essential. That's still true.
Ownership and accountability are still the difference between good engineers and mediocre ones. AI doesn't give you judgment. A junior who ships code without understanding it, without testing edge cases, without thinking about failure modes: that's a liability, not an engineer. I see juniors who use AI as a crutch to skip thinking. The ones who succeed use it as a tool to move faster through the parts they understand, then slow down for the parts that matter.
Communication skills are more important than ever. If a junior can generate code, but can't explain what they're shipping or why, they create chaos. I now do more pair programming and more whiteboarding sessions than I did five years ago. We talk through design decisions before the code exists. The AI handles the implementation; we handle the thinking.
Debugging systems, not just code. Juniors still need to learn how to read logs, trace errors across services, understand deployment issues, and debug in production. These skills are harder to shortcut with AI. You have to actually understand your architecture.
Practical Mentoring Strategies That Work
Pair programming with AI tools built in. I don't ask juniors to ignore Claude. I ask them to use it in front of me. We'll sit together and I'll say, "Let's ask Claude to implement this feature," then we'll read the output together and critique it. Why did it make this choice? What's missing? What's risky? This way, the AI does the grunt work, but the junior has to defend and understand every line. It's fast enough that we can actually explore alternatives.
"Explain this code" exercises. I'll give a junior some production code they didn't write (either from a few months ago or from Claude) and ask them to write documentation or a design document explaining it. Not what it does: why it's architected this way, what tradeoffs it made, what edge cases it handles. This forces deep reading instead of surface comprehension.
Architecture-first assignments. Before I ask a junior to build something, I make them design it. Write out the components, the data flow, the error cases. Then we review the design. Only after the design is solid do they implement it. If they use AI to implement, great: but the architecture work has to be theirs. This flips the dependency: AI generates code, the junior provides direction.
Real debugging sessions. When something breaks (and it will), I make sure the junior is involved in the triage. We're not copying logs into an AI chat and waiting for answers. We're reading stack traces, forming hypotheses, testing them. This is where they learn how systems actually work.
Regular architecture reviews. Every two weeks, I sit with a junior and ask them to teach me about something they shipped. "Why did you structure this module this way?" "What could go wrong?" "How does this scale?" If they can't articulate it, we know the AI wrote it and they rode along. That's a signal to slow down and build understanding.
The Honest Reality
Some juniors will use AI as an excuse to not develop their skills. They'll generate code, ship it, and never internalize it. As a mentor, you have to catch this early and be direct about it. I've had the conversation: "You're writing code faster, but you're not learning faster, and in 18 months that's going to be a problem." It's uncomfortable and necessary.
The flip side: some juniors will be smarter and faster than previous generations. They'll use AI to accelerate through the boring parts and focus their energy on understanding systems at a deeper level. I've seen this too, and it's exciting.
Your job as a mentor is to push them toward the second path. Use AI to accelerate learning, not to replace it. Ownership, judgment, and systems thinking can't be generated by an AI: those come from understanding. That's where you add value.
The Mentoring Role Evolved, But It Didn't Disappear
Five years ago, I taught syntax and patterns. Now I teach judgment and architecture. The tools changed, but the core mission stayed the same: take someone who can execute tasks and help them become someone who can think strategically and own outcomes. AI makes some of that easier (less time explaining syntax) and some of it harder (you can't assume they'll struggle through implementation details). Either way, good mentoring is still the difference between engineers who matter and engineers who just ship code.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Frequently Asked Questions
How long does it take to implement AI automation in a small business?
Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
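The calculation above can be sketched in a few lines. The numbers here are purely illustrative (they sit inside the ranges quoted in the answer); plug in your own measurements:

```javascript
// Illustrative inputs only: replace with your own process measurements.
const hoursPerWeekBefore = 10;    // time the manual process takes today
const fullyLoadedHourlyCost = 40; // £/hour including overheads
const toolCostPerMonth = 200;     // £/month for the automation stack

const weeksPerMonth = 52 / 12; // ≈ 4.33
const monthlySaving = hoursPerWeekBefore * weeksPerMonth * fullyLoadedHourlyCost;
const monthlyNet = monthlySaving - toolCostPerMonth;

// First-year ROI expressed as a percentage of first-year tool spend.
const yearOneROI = ((monthlyNet * 12) / (toolCostPerMonth * 12)) * 100;
console.log(Math.round(monthlySaving), Math.round(yearOneROI)); // → 1733 767
```

With these example numbers the automation saves roughly £1,733/month and returns about 767% in year one, squarely inside the 300-1000% range.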
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
Put This Into Practice
I use versions of these approaches with my clients every week. The full templates, prompts, and implementation guides, covering the edge cases and variations you will hit in practice, are available inside the AI Ops Vault. It is your AI department for $97/month.
Want a personalised implementation plan first? Book your AI Roadmap session and I will map the fastest path from where you are now to working AI automation.