Vibe Coding Is Like Conducting Without Experience
In the first article, I described how AI has changed my role as a developer. It sometimes feels less like playing every instrument myself, and more like guiding an orchestra.
There’s another phenomenon that comes with this shift. A term that’s been floating around for quite a while now: vibe coding.
Vibe Coding, eh?
Vibe coding is when you build software by describing what you want, letting AI generate the code, and adjusting the result until it works.
You have a sense of what the final result should do, but you’re not necessarily thinking about every technical detail along the way.
In musical terms, it’s a bit like conducting without any experience. You know how the piece should sound. You try to guide the orchestra toward that sound. But you’re not entirely sure which instruments will play which parts.
And That’s Actually Great
It’s easy to criticize vibe coding, but honestly, I think it’s fantastic.
A lot of people can now automate things that used to be beyond their reach.
Ideas that previously felt too big, too time-consuming, or too experimental can now become working software in an afternoon. We can explore ideas faster, test concepts earlier, and get something running quickly.
And that’s still one of the biggest joys of programming: getting something to work.
Seeing an idea turn into a running system never really gets old. Vibe coding just lowers the barrier to that moment.
But Conducting Still Requires Experience
Where things become interesting is when the project grows. When you vibe code, you’re guiding the music without fully knowing the orchestra yet.
Sometimes the AI brings in a full orchestra when a small quartet would have been enough.
Sometimes the code runs, but the architecture is inverted—like asking the flutes to carry the bass line. It’s technically audible, but the resonance is all wrong.
And occasionally you ask for a short waltz — a quick prototype — and suddenly you’re looking at something closer to a two-hour symphony.
The system runs. The feature works. But you start to wonder whether the composition underneath it all really makes sense.
The Subtle Problems
As a vibe-coded application grows, certain issues become harder to see. Not because the AI doesn’t know about them, but because you didn’t explicitly ask for them.
Things like:
- Architecture: are the abstractions sound, or are we stacking layers on top of layers?
- Scalability: will this behave the same way with thousands of users?
- Security: beyond validation, have deeper attack vectors been considered?
- Consistency: does the codebase follow clear patterns?
- Maintainability: will you understand this system six months from now?
AI often produces code that works. But working code and well-designed systems are not always the same thing.
A system can run perfectly fine while quietly accumulating complexity underneath.
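To make that concrete, here is a toy Python contrast (all names are invented for illustration): both versions behave identically, but one wraps a quartet-sized problem in orchestra-sized abstractions.

```python
# Both shapes "work": they format a user's display name.
# A vibe-coded session can easily produce the first shape
# when the second is all the problem needs.

class NameFormatterStrategy:
    """Abstract strategy, layered in 'for flexibility' nobody asked for."""
    def format(self, first: str, last: str) -> str:
        raise NotImplementedError

class DefaultNameFormatter(NameFormatterStrategy):
    def format(self, first: str, last: str) -> str:
        return f"{first} {last}".strip()

class NameFormatterFactory:
    """A factory that only ever creates one thing."""
    def create(self) -> NameFormatterStrategy:
        return DefaultNameFormatter()

# The quartet version: one function, same behavior.
def display_name(first: str, last: str) -> str:
    return f"{first} {last}".strip()

# Identical output, very different maintenance burden.
assert NameFormatterFactory().create().format("Ada", "Lovelace") == display_name("Ada", "Lovelace")
```

Neither version is wrong. But only one of them will still be easy to reason about six months from now.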
Security Is a Good Example
Security is where this becomes very visible.
An AI might generate proper validation rules, escape output, and hash passwords correctly. That’s great. But deeper security thinking often lives at a different level.
Questions like:
- Are there rate limits on critical endpoints?
- What happens under automated attack?
- Are logging and monitoring in place?
- Are authorization boundaries tested?
Even more advanced ideas — like fuzz testing, privilege escalation scenarios, SQL injection, prompt injection or unusual edge-case inputs — rarely appear unless you explicitly ask for them.
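None of these safeguards tend to appear unless someone asks for them. As one concrete illustration, here is a minimal sketch of the first item, rate limiting, in Python (the class and its names are invented for this example; a production system would typically back this with a shared store such as Redis rather than process-local memory):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window rate limiter: at most `limit` calls
    per `window` seconds, tracked per client."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.calls = defaultdict(deque)  # client_id -> timestamps

    def allow(self, client_id: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.calls[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject
        q.append(now)
        return True

# Three requests per minute allowed; the rest are rejected.
limiter = RateLimiter(limit=3, window=60.0)
results = [limiter.allow("attacker", now=0.0) for _ in range(5)]
```

Twenty lines, and trivially within the AI's abilities. The point is that it only shows up in the score if the conductor writes it in.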
The AI is an expert session musician; it will play exactly what’s on the page, but it won’t tell you the stage is on fire unless you ask it to check the temperature.
Is This Just the Next Evolution of Tools?
We are used to our tools being precise instruments, but vibe coding introduces a layer of ‘creative’ uncertainty. It raises a fundamental question: is this just the next step in the evolution of our tools, or have we moved into entirely new territory?
In some ways, vibe coding reminds me of earlier moments when machines started doing work humans used to do.
Take calculators. There was a time when every calculation had to be done by hand. When calculators arrived, the mechanical part of the work disappeared overnight.
But calculators were easy to trust. They’re deterministic: given the same input, they always produce the same correct answer.
Large language models are different. They can generate impressive solutions, but they’re not deterministic. The same prompt can produce slightly different results. Sometimes they get things subtly wrong. Sometimes to your advantage, sometimes not.
That makes them powerful tools — but harder to depend on blindly.
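A toy sketch can show the contrast. This is not a real model API; `sampled_completion` is an invented stand-in that just samples from a weighted list to mimic the behavior:

```python
import random

def calculator(a: float, b: float) -> float:
    """Deterministic: the same input always yields the same answer."""
    return a + b

def sampled_completion(prompt: str) -> str:
    """Toy stand-in for an LLM: it samples from plausible options,
    so repeated calls with the same prompt can disagree."""
    options = ["result A", "result B", "result C"]
    weights = [3, 2, 1]
    return random.choices(options, weights=weights, k=1)[0]

assert calculator(2, 2) == calculator(2, 2)  # always identical
answers = {sampled_completion("same prompt") for _ in range(50)}
# `answers` will usually contain more than one distinct string.
```

The calculator collapses to a single answer every time. The sampler, like a model, gives you a distribution, and you have to judge which draw is actually right.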
Prototypes Are Cheaper Than Ever
One of the most powerful things vibe coding enables is rapid prototyping.
You can explore an idea quickly.
You can validate a concept.
You can build something that works.
That’s incredibly valuable.
But there’s a trap here.
Just because something works as a prototype doesn’t mean it should immediately become your production system.
A prototype is like a musical sketch. It captures the idea of the composition. But it usually needs refinement, restructuring, and careful orchestration before it’s ready for the main stage.
The same is true for software.
Learning the Orchestra
The real skill in vibe coding isn’t just prompting AI.
It’s learning how the orchestra behaves.
Which parts it handles well.
Where it tends to overcomplicate things.
Where subtle issues hide beneath working code.
That kind of understanding doesn’t appear automatically just because you’re using AI. You still have to build it the same way developers always have: by writing code, reading code, debugging systems, and seeing what breaks.
In other words: if you want to be the conductor, you still have to learn the instruments.
AI can help you produce the music faster. But understanding the composition — and knowing when something is off — still requires experience.