Everyone's talking about vibe coding like it's the future of software. Andrej Karpathy coined the term, and now every AI coding tool is leaning into it. "Just describe what you want and the AI builds it." I've been doing exactly that for the past three months. Here's the part nobody's posting about on Twitter.
The 20-minute miracle
Two weeks ago I needed a webhook retry system for one of my SaaS products. Instead of writing it myself, I described the requirement in Claude Code: exponential backoff, dead letter queue after five failures, admin dashboard to replay failed hooks. Twenty minutes later I had working code across four files. Tests passed. The thing actually ran.
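For context, the core of what I asked for can be sketched in a few lines. This is my own illustrative reconstruction, not the generated code — names like `backoff_seconds` and `handle_failure` are mine:

```python
import random

MAX_ATTEMPTS = 5  # after five failures, the delivery moves to the dead letter queue

def backoff_seconds(attempt: int, base: float = 2.0, cap: float = 300.0) -> float:
    """Exponential backoff with jitter: roughly 2s, 4s, 8s... capped at 5 minutes."""
    delay = min(base ** attempt, cap)
    return delay * random.uniform(0.5, 1.0)  # jitter spreads out retry storms

def handle_failure(delivery: dict, retry_queue: list, dead_letter_queue: list) -> None:
    """Route a failed webhook delivery: schedule a retry, or dead-letter it."""
    delivery["attempts"] += 1
    if delivery["attempts"] >= MAX_ATTEMPTS:
        dead_letter_queue.append(delivery)  # the admin dashboard replays from here
    else:
        delivery["next_retry_in"] = backoff_seconds(delivery["attempts"])
        retry_queue.append(delivery)
```

The jitter is the detail worth keeping even in a quick build: without it, a downstream outage makes every failed hook retry at the same instant.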
I shipped it. My users got webhook retries that afternoon. Three months ago this would've been a two-day task. The productivity gain is real and I'm not going to pretend otherwise.
Then the codebase started pushing back
The problem showed up a week later when I needed to modify the retry logic. The AI-generated code worked, but I hadn't written it. I didn't have the mental model of how the pieces fit together. The retry scheduler was coupled to the queue consumer in a way I wouldn't have designed. The admin dashboard made assumptions about the database schema that conflicted with my existing patterns.
I spent three hours untangling code I'd "shipped" in twenty minutes. Not because the code was bad. Because it was someone else's architecture inside my codebase. The AI doesn't know my conventions. It doesn't know I always separate side effects from business logic. It doesn't know my naming patterns.
This is the gap between vibe coding a prototype and vibe coding inside a living product.

Where I draw the line now
I've settled into a workflow that actually works. New features get a spec first — a markdown file describing the behavior, the constraints, and how it connects to existing code. I write this myself in about 15 minutes. Then I hand the spec to Claude Code and let it implement.
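For illustration, a spec for the webhook retry feature might look something like this. It's a reconstruction for this post, not my actual file, and details like the `deliveries` table are hypothetical:

```markdown
# Spec: webhook retry system

## Behavior
- Failed deliveries retry with exponential backoff (2s base, 5 min cap).
- After five failed attempts, move the delivery to a dead letter queue.
- Admin dashboard lists dead-lettered hooks and supports one-click replay.

## Constraints
- Keep side effects (HTTP calls, queue writes) out of the business logic layer.
- Reuse the existing `deliveries` table; no new migrations.

## Integration
- The scheduler runs inside the existing job runner, not a new process.
- Dashboard pages follow the patterns in the current admin views.
```

Fifteen minutes of this saves the AI from guessing at exactly the things it can't know: my layering rules, my schema, where the code should live.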
The spec is the difference. Without it, the AI optimizes for "working code." With it, the AI optimizes for "working code that fits my system." My rejection rate — the share of generated code needing significant rewrites — dropped from maybe 40% to under 10%.
For greenfield projects or throwaway prototypes, pure vibe coding is unbeatable. I built a landing page variant tester in an afternoon last week. Didn't write a spec. Didn't care about architecture. It runs, it collects data, and I'll throw it away in a month.
But for my core products with paying users and code I'll maintain for years, the spec-first approach isn't optional anymore.
The solo dev math
Here are my actual numbers from the past month. I tracked time on eight features across two products.
Pure vibe-coded features averaged 25 minutes to "working." Then 90 minutes of cleanup and integration work. Total: about 115 minutes per feature.
Spec-first features averaged 40 minutes total — 15 for the spec, 25 for implementation. Cleanup averaged 12 minutes. Total: about 52 minutes per feature.
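The totals above work out like this:

```python
# Per-feature lifecycle time, in minutes, from my tracking
vibe_total = 25 + 90        # time to "working" + cleanup/integration
spec_total = 15 + 25 + 12   # spec + implementation + cleanup

speedup = vibe_total / spec_total
print(vibe_total, spec_total, round(speedup, 1))  # 115 52 2.2
```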
The spec-first approach is literally twice as fast when you count the full lifecycle. The vibe-coded version just front-loads the dopamine hit and back-loads the pain.

The tool isn't the problem
I've seen people blame their AI coding tool when vibe coding goes wrong. Cursor's broken. Copilot hallucinated. Claude Code missed the context. Usually the tool did exactly what you asked. You just asked for the wrong thing.
The shift I've made isn't about tools. It's about treating AI coding like I treat any other powerful tool: with a clear brief. I don't hand a contractor a vague idea and expect production-ready work. I shouldn't do that with an AI agent either.
Vibe coding is real. It works. And if you're a solo dev shipping fast, it's the biggest leverage multiplier we've ever had. Just don't let the speed trick you into skipping the thinking. The thinking is still your job.
