I Built and Launched a Production App in 7 Days
A year ago, I would have completely dismissed that sentence.
I’ve been immersed in software engineering for close to a decade - beginning with formal tertiary education and continuing through years of professional practice. I have built systems from concept to production, navigated deployments, and wrestled with architectural trade-offs that most users never see. You can imagine, then, my involuntary reaction when I first encountered the phrase “vibe coding.”
I was defensive. It felt reductive. What unsettled me was not the technology itself, but what it represented. The suggestion that a discipline could be flattened so casually - that years of dedication could be replaced by well-phrased prompting - touched a nerve. So I avoided it. Although I applied AI creatively in nearly every other aspect of my life, I had unfairly relegated it when it came to my engineering. I had formed a conclusion without understanding what it actually was.
My perspective began to shift after reading Wisdom Takes Work by Ryan Holiday. One idea stood out: be cautious of the concepts you reject most quickly. At that moment, I realised my discomfort might be signalling stubbornness rather than prudence. I had formed this opinion about AI-assisted development without even experimenting with it meaningfully. If my scepticism was valid, it should survive direct exposure. So, I decided to treat AI tools not as a philosophical debate, but as an engineering experiment.
The Illusion of Full Automation
My research began with consumer-facing platforms such as Replit - environments that claim to generate applications end-to-end. My initial impression, to their credit, was positive. Their claims weren’t incorrect - with relatively minimal instruction, they can produce something that resembles a working product. Interestingly, that is also their weakness.
The abstractions were often unnecessary. Small changes required disproportionate prompting, and inefficiencies accumulated through circular logic. It became clear to me that these systems are capable but not discerning - they optimise for the immediate request rather than for long-term stability.
More worryingly, I noticed a pattern: without technical judgement, you cannot reliably tell when the model is overcomplicating a problem, introducing fragility, or simply patching a deeper issue. The output looked complete. It compiled. But it was not necessarily well constructed. In this instance, AI accelerated production, but introduced significant technical debt in doing so.
The Economics of Knowing vs Not Knowing
AI does not eliminate the cost of building software - it redistributes it toward those who understand how to wield it. Many end-to-end agencies now build on top of established large language models for their reasoning depth and training scale. They integrate systems such as Claude and Codex into their workflows, and they bill clients with a surcharge in order to be profitable.
But structurally, the model resembles a restaurant. The restaurant purchases ingredients and applies a markup for preparation. If you know how to cook, you can buy the same ingredients and produce the meal yourself at lower cost. If you do not, you pay for the skill and equipment layered on top.
Similarly, foundational models are publicly accessible and are the ingredients for AI-assisted development. Without the knowledge and tools to prepare them efficiently for personal use, small issues compound, unnecessary prompts accumulate, and costs rise quickly. This is where my domain knowledge took on a more tangible purpose. AI did not flatten my expertise - it clarified its value.
AI is a Junior, Not an Autopilot
The real breakthrough occurred when I shifted my attention to more developer-oriented tools: Cursor, Codex, and Claude. Unlike the previous services, these environments do not promise to “build your app for you.” They are designed to be used deliberately and guided by real developers. I noticed they behave strikingly like a junior developer - eager to learn, full of energy to implement, but very literal.
The leverage lies not in asking for everything, but in knowing when to intervene. I maintained structured context files to anchor the models, continuously adjusting them to match the phase of development I was in. I used dictation to increase prompt fidelity and simulate real conversation. I avoided outsourcing trivial fixes that I could implement faster myself. I worked across parallel workspaces, chaining agents and linking design to backend context via MCP integrations.
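Concretely, one of those context files might look something like the sketch below. The filename, section names, and contents here are hypothetical illustrations of the idea, not the actual files from this project:

```markdown
# PROJECT_CONTEXT.md (hypothetical sketch)

## Current phase
MVP week: ship the core dinner-planning flow only.
Defer theming, analytics, and account settings.

## Conventions the agent must follow
- Prefer editing existing modules over creating new ones.
- No new dependencies without asking first.
- Every change must leave the app building and the tests passing.

## Known constraints
- App Store submission at end of week; avoid anything that triggers extra review.
```

A file like this is updated whenever the phase changes, so the agent’s defaults track the project rather than the last prompt.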
I was not replacing engineering - I was orchestrating it. My role didn’t flatten - it shifted upward. The line-by-line implementation I was familiar with transformed into coordinated execution.
Revisiting an Old Idea
For years, I had noticed a recurring friction point in my family: deciding what to cook for dinner, given who was home, what had been eaten recently, and the need to avoid repetition. It was a simple problem - meaningful, but not urgent enough to justify six months of development under traditional constraints. At those timelines, the opportunity cost was too high. The idea remained dormant - until the constraints changed.
If a functional MVP can be built in a matter of weeks instead of quarters, experimentation becomes rational. I committed to a goal: one week to produce something usable in production. Within seven days, I had a complete MVP. Two days later, it was submitted to the App Store under the name Sorted., with an Android release pending. Under previous constraints, I would not have attempted this build.
Compressed Validation Cycles
A few years ago, I built Dinner Party entirely solo over approximately eight months. That timeline reflected the reality of the tools available at the time. It required many iterations of careful architectural planning, algorithmic nuance, and sophisticated UI design. For those reasons, I remain proud of that accomplishment.
Establishing a production application this quickly is not the byproduct of reducing the complexity of the software itself, but of collapsing the validation cycle. Traditionally, extensive upfront user research mitigated engineering waste: when development cycles spanned months, misalignment was expensive. Today, for lean teams and solo founders, the MVP itself becomes the research instrument.
If something can be built and deployed within weeks, theoretical validation is better repurposed as practical experimentation. This does not eliminate the value of research - it reframes the timing. When the cost of building falls, the cost of deliberation rises.
The Psychological Shift
Within one week, I moved from seeing AI-assisted development as a threat to engineering to recognising it as a lever within it. Software engineering is becoming less about manually constructing every component and more about coordinating intelligence effectively. The engineer’s role shifts upward - towards judgement and coordination, and away from writing every line by hand.
I built and launched a production application in seven days. That fact is less important than what it represents. The landscape has changed, and the constraints are different. The cost of experimentation has fallen dramatically. And in this environment, the question is no longer whether AI will influence how software is built - it already has. The real question is whether we choose to deliberately adapt, or continue to resist.