Depth Beats Breadth When Coding With AI

The bar has been raised. Using AI in software development is no longer optional - it's table stakes. Increasingly powerful consumer-facing LLMs such as ChatGPT and Claude have narrowed the productivity gap between junior, intermediate, and senior engineers. The differentiating factor today isn't whether AI is in your stack, but how you optimise it - and whether you remain in control while doing so.

My experience designing systems from scratch - often in lean teams such as @Rease, or completely solo for @Dinner Party - has shown me the most effective way to leverage AI while still shipping a product you genuinely understand and own: depth-first development rather than breadth-first.

This article is written for engineers in lean startup environments: small teams, large codebases, and pressure to move fast. I've been there personally, staring at a massive backlog and asking "Where do I even start?", tempted to ask AI to generate everything at once. What follows is the approach I use to increase output without losing control.

Slowing Down to Speed Up

Depth-first feels slow at first. By depth-first, I mean building one vertical slice of a system - logic, data, and front end - to a production-ready standard before moving on. Rather than shipping every UI screen upfront, depth-first produces a small section of the application that is genuinely complete. As a result, progress looks understated since there aren’t any flashy demos for stakeholders.
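To make "one vertical slice" concrete, here is a deliberately tiny sketch in Python: a hypothetical "notes" feature carried from data through logic to presentation, finished end to end before any second feature exists. The feature, names, and in-memory store are all illustrative assumptions, not a prescription.

```python
# A minimal vertical slice: one feature, complete from data to presentation.
# Everything here is hypothetical and for illustration only.

from dataclasses import dataclass, field

@dataclass
class NoteStore:
    """Data layer: in-memory storage for the slice (swap for a real DB later)."""
    notes: dict[int, str] = field(default_factory=dict)
    _next_id: int = 1

    def add(self, text: str) -> int:
        note_id = self._next_id
        self.notes[note_id] = text
        self._next_id += 1
        return note_id

def create_note(store: NoteStore, text: str) -> int:
    """Logic layer: validation lives here, finished rather than stubbed."""
    text = text.strip()
    if not text:
        raise ValueError("note text must not be empty")
    return store.add(text)

def render_note(store: NoteStore, note_id: int) -> str:
    """Presentation layer: the slice renders, so the feature is demonstrably done."""
    return f"Note #{note_id}: {store.notes[note_id]}"

store = NoteStore()
nid = create_note(store, "  ship one slice end to end  ")
print(render_note(store, nid))  # → Note #1: ship one slice end to end
```

The point isn't the feature itself, but that every layer of it is production-quality before the next slice begins - nothing here is a placeholder waiting to be "filled in".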

But breadth-first speed is deceptive. AI can produce a considerable amount of output in an instant, and amid all that boilerplate, scaffolding, and utility code, your imperfect memory starts to lose track of what you've built and how it fits together. Unfortunately, AI isn't immune to this either.

Changes later become expensive because everything is entangled. Infrastructure built to anticipate future needs often collapses the moment real requirements appear. Depth-first development prioritises one working vertical at a time - so when something works, it really works. From this stable foundation, expansion becomes faster, safer, and more predictable. By resisting the temptation to optimise for appearances, long-term velocity is driven by cohesion, not volume.

Depth Feels More Natural

Depth-first development mirrors the way humans naturally learn. Focusing on one module forms strong mental models of that domain. Your knowledge of the conventions, patterns, and past decisions applied to the codebase is reinforced, making future extensions easier because you're adding to your understanding rather than juggling it. Depth-first work respects the limits of working memory, which is in especially high demand in solo or small-team environments.

Breadth-first work fractures attention, spreading cognitive load across too many domains at once. Conventions, shared abstractions, and design decisions fade from working memory as the cost of constantly switching contexts grows too great. This eventually slows the engineer's problem solving and critical thinking, as energy is spent recalling the large number of resources created at the outset. Progress slows, even if output appears high.

A depth-first approach also aligns with what psychology describes as the Progress Principle, by completing small, achievable segments of a system to create an authentic sense of headway. What you’ve built is finished - and it works. The complexity of the task engineers perform is incrementally and neatly conquered, which compounds confidence and promotes further productivity. 

Breadth-first work delays completion and substitutes it with the illusion of momentum. Screens exist, but nothing is truly complete. When the time finally comes to “fill in the logic,” the system feels just as overwhelming as it did on day one, except now the surface area is larger.

Depth Sharpens AI Output

Anyone who's used AI extensively knows that its output improves when the context of the conversation is stable and reinforced. The more time spent working with AI on a single module, the more robust its understanding of the codebase's intent and constraints within that domain. The resulting output is more precise and relevant.

When AI is used with a breadth-first philosophy - jumping across many modules and concerns within one conversation - its attention is scattered. Context degrades, and hallucinations become more likely. Subtle inconsistencies creep in, slowing the engineer through additional prompting - or worse, lying dormant until a future bug hunt.

In practice, I treat one AI conversation per module as a rule of thumb, with conversation states preserved and reusable should a module be worked on again. At checkpoints, I’ll often ask AI to generate a concise context summary with the intention of priming future conversations with the relevant information. This orients the conversation while allowing the relevant subject to remain in focus and grants AI the chance to become a domain expert alongside you, rather than a distracted assistant. Ultimately, AI doesn’t have a global, reliable memory of your system. You do.
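The checkpoint-summary habit can be as lightweight as a small store of per-module context, keyed by module name and pasted into the start of a fresh conversation. The sketch below assumes a hypothetical `save_summary`/`load_primer` pair and a plain JSON file; the helper names and storage format are mine, not a standard tool.

```python
# Hypothetical helpers for priming a new AI conversation with a module's
# saved context summary. Names and storage format are illustrative only.

import json
from pathlib import Path

SUMMARY_FILE = Path("module_summaries.json")

def save_summary(module: str, summary: str) -> None:
    """At a checkpoint, persist the AI-generated context summary for a module."""
    data = json.loads(SUMMARY_FILE.read_text()) if SUMMARY_FILE.exists() else {}
    data[module] = summary
    SUMMARY_FILE.write_text(json.dumps(data, indent=2))

def load_primer(module: str) -> str:
    """Build the opening prompt for a fresh conversation on this module."""
    data = json.loads(SUMMARY_FILE.read_text()) if SUMMARY_FILE.exists() else {}
    summary = data.get(module, "No prior context recorded.")
    return f"Context for the '{module}' module:\n{summary}\nContinue from here."

save_summary("billing", "Invoices are immutable after issue; totals computed server-side.")
print(load_primer("billing"))
```

Whether the store is a JSON file, a notes document, or pinned messages matters less than the discipline: each new conversation opens already anchored to one module's intent and constraints.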

Depth Limits the Blast Radius

Every software engineer has been there. You change one element of your code and suddenly you're fighting an uphill battle of complications preventing the code from running. Or worse, the bugs lie dormant and only surface during testing. By that point, too much code has been written to truly isolate the issue, so you scour through commits to find the single source of error. Depth-first development narrows this blast radius.

By focusing on one module or vertical, bugs are easier to trace, isolate, and remedy. You are able to understand why something broke, rather than only knowing what broke. Breadth-first AI development often produces large amounts of boilerplate and architecture upfront. Although this may look impressive and future-proof, its underlying rigidity in the face of pivoting or unnoticed requirements is a glaring vulnerability. Its wide scope creates ripples across multiple modules, where fixing one issue often spawns several more - a hydra effect. As a result, regression testing becomes painfully slow.

With depth-first work, you validate the viability of each component before outward expansion. If performed in reverse - building breadth first - everything can look complete until reality sets in, and changes force you to refactor logic that now touches half the codebase.

Trade-Offs To Consider

This wouldn't be an honest review of a depth-first philosophy without acknowledging its downsides. As mentioned above, it can feel inefficient and discouraging early on compared to the artificial momentum breadth-first AI creates - especially to non-technical stakeholders. Early work is also harder for those stakeholders to appreciate, since depth isn't obvious to anyone lacking domain knowledge. Likewise, engineers without a certain degree of domain knowledge will struggle to apply this approach effectively, as AI can't compensate for missing fundamentals (this ties directly to "AI only knows what you ask it"). Finally, it's less forgiving of late pivots, as the deep logic work inherent to the methodology requires concrete requirements.

These are real trade-offs. Most, however, are avoidable with clear user research, planning, and confidence in the initial direction - and having one or more skilled engineers to guide the process is a prerequisite. In early exploration phases, breadth-first prototyping can prove useful, especially when validating ideas visually. But once the first line of production code is written, pivots should be rare - not routine.

A Final Perspective

To be clear, depth-first development does not mean avoiding infrastructure entirely. A minimal, conventional foundation is still required and may include authentication scaffolding, routing, data access patterns, and basic architectural boundaries. I encourage the foundation to be well-documented, and aligned with best practices or frameworks (such as MVVM or similar patterns). However, it should exist to support depth - not anticipate every future requirement. Beyond that baseline is where depth-first development should begin.

As I’ve repeatedly stated in my previous work: AI hasn’t removed the need for engineering judgment - it’s amplified it. Depth-first AI usage doesn’t reject speed, but channels it. It allows engineers to move fast without losing understanding, and to master each component of the codebase before moving forward. AI can generate breadth endlessly - a skill which is now accessible to everyone. It’s the mark of an experienced engineer to know when AI serves understanding - and when it undermines it.

Next

AI Only Knows What You Think to Ask