Why Vibe Coding Feels Right — But Isn't Ready (Yet)

The promise of AI-first programming is seductive

It started with a simple request: “Add a dark mode toggle.”

The AI-generated pull request looked clean. It touched all the right files. The code passed tests. I stared at it, unsure whether to smile or worry.

I hadn’t written a single line. I hadn’t even reviewed it thoroughly. I just… accepted the vibe.

Welcome to vibe coding — the uncanny new layer of abstraction in software engineering.

Coined by Andrej Karpathy, vibe coding describes a programming style where developers describe what they want and trust the AI to write the code. Not assist — write. And it works disturbingly well, especially for small, isolated tasks and quick prototypes.

But here’s where I diverge from the excitement: Vibe coding is not software engineering. Not yet. And if we treat it like it is, we risk outsourcing our judgment to a system that lacks it.


The abstraction is real — but so is the responsibility

In academic terms, vibe coding is a leap in abstraction and delegation. Just like:

  • Assembly gave way to high-level languages
  • Declarative UIs replaced imperative DOM manipulation
  • ORM libraries let us "skip" SQL

Now, AI is offering us an even higher-level interface: natural language.

But with every abstraction, there’s a tradeoff.

The difference is that with previous abstractions, we still knew what was underneath. We chose to ignore the boilerplate, but we could dig when we needed to.

With vibe coding?

We’re not just skipping steps — we’re skipping understanding.

And that changes the developer’s role in a way the industry hasn’t fully reckoned with.


When you stop writing code, you start managing unknowns

Let’s take a concrete scenario from a startup I recently advised. They were building a backend service using vibe coding via Copilot and Cursor.

The AI generated functions to handle file uploads, validate user metadata, and store it in a cloud database. The developer running the show was smart — they knew the domain — but they weren’t reading the code anymore.

When something broke in production (a rare edge case in metadata parsing), debugging became a nightmare.

  • Nobody fully understood the AI-generated logic.
  • The function names were fine, the structure looked okay, but intent was missing.
  • Logging wasn’t standardized. Error messages were vague.

What should have taken an hour to patch ballooned into a full-day investigation.
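To make that failure mode concrete, here is a hypothetical sketch — not the startup's actual code; all names and validation rules are illustrative — contrasting the plausible-but-opaque style AI tends to produce with a version that carries its intent in the error messages:

```python
# Hypothetical sketch: names and rules are illustrative, not the startup's code.

# AI-style version: looks reasonable, but swallows every failure the same way.
def validate_metadata(meta):
    try:
        return {
            "filename": meta["filename"].strip(),
            "size": int(meta["size"]),
            "tags": meta.get("tags", []),
        }
    except Exception:
        return None  # Which field failed? Why? Production logs won't say.

# Intent-revealing version: explicit checks, specific errors, no silent None.
def validate_metadata_explicit(meta):
    if not isinstance(meta.get("filename"), str) or not meta["filename"].strip():
        raise ValueError("metadata.filename must be a non-empty string")
    try:
        size = int(meta["size"])
    except (KeyError, TypeError, ValueError):
        raise ValueError(f"metadata.size must be an integer, got {meta.get('size')!r}")
    if size < 0:
        raise ValueError(f"metadata.size must be non-negative, got {size}")
    tags = meta.get("tags", [])
    if not all(isinstance(t, str) for t in tags):
        raise ValueError("metadata.tags must be a list of strings")
    return {"filename": meta["filename"].strip(), "size": size, "tags": tags}
```

When the rare edge case hits, the first version returns None and the failure surfaces far from its cause; the second pinpoints the field and the violated rule, which is the difference between an hour-long patch and a full-day investigation.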

And this is the core issue:

AI doesn’t model intent. It approximates syntax.

It writes plausible code. Often correct-looking code. But correctness isn’t enough. We need:

  • Justification (Why was this approach chosen?)
  • Alignment (Does it match the architecture?)
  • Explainability (Can the next developer maintain it?)

Without these, you’re inheriting code you didn’t write and don’t understand — even if it was your own prompt that produced it.


The cognitive gap is widening

Ironically, the more capable these AI tools become, the less mental effort developers put into crafting their solutions.

I’ve seen this in student projects, even in mid-sized teams:

  • Devs prompt AI for features instead of designing them.
  • AI generates UI + backend glue in seconds.
  • Nobody reviews the output with engineering skepticism — because “it works.”

But working is a low bar. We don’t write software just to “make it run.” We build for:

  • Clarity
  • Maintainability
  • System-level behavior

These are not emergent properties of vibes.

One developer told me, “I’m way more productive with Copilot — I just don’t remember how I got anything done.” They meant it as a joke. I didn’t laugh.


This is a moment of paradigm shift — not immaturity

To be clear: I’m not anti-vibe coding. I’m cautiously optimistic — but only if we treat it as a shift in engineering practice, not a shortcut.

If you squint, you’ll see echoes of:

  • Model-Driven Engineering, which also promised generation from higher-level input.
  • Low-code platforms, which aimed to empower “citizen developers.”
  • Even test-driven development, which inverted the way we approached implementation.

What’s different now?

Vibe coding is language-native, interface-free, and general-purpose. It promises to let anyone build anything — just by saying it out loud.

And that’s the dangerous beauty of it.

We’re no longer constrained by syntax. The bottleneck now is our ability to express what we mean — and to verify that the AI interpreted it properly.

That’s a much harder skill.


So what should we do instead of just vibing?

1. Adopt AI as a collaborator, not a creator

Use vibe coding for:

  • Spikes
  • Exploratory branches
  • Throwaway scripts
  • Boilerplate scaffolding

But keep your engineering hat on. If you can’t explain what the AI wrote, don’t ship it.

“Would I approve this PR if it were written by a junior dev?” That’s your new vibe coding litmus test.

2. Shift your team's rituals, not just tools

  • Add prompt reviews to code reviews.
  • Ask engineers to explain why they prompted a certain feature, not just what the prompt was.
  • Encourage reading diffs, not just accepting AI-generated suggestions.
  • Tag AI-generated code explicitly (#AIgen) so future maintainers know what they’re reading.
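One lightweight way to make that last ritual checkable — a minimal sketch, assuming the (hypothetical) team convention of marking generated lines with an "AIgen" comment tag — is a script that reports how much of a file is machine-written, so reviewers know what they are really reading:

```python
# Minimal sketch of an AIgen audit. Assumes a hypothetical team convention
# of tagging AI-generated lines with an "AIgen" comment marker.

def aigen_ratio(source: str, tag: str = "AIgen") -> float:
    """Return the fraction of non-blank lines carrying the AI-generated tag."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    tagged = sum(1 for ln in lines if tag in ln)
    return tagged / len(lines)

if __name__ == "__main__":
    sample = (
        "def upload(file):  # AIgen\n"
        "    save(file)  # AIgen\n"
        "\n"
        "def audit():\n"
        "    pass\n"
    )
    print(f"{aigen_ratio(sample):.0%} of non-blank lines are AI-generated")
```

A number like this is not a quality metric, but surfacing it in review forces the conversation the ritual is meant to create: who understands this code, and how do we know?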


3. Teach abstraction literacy

In education, we need to stop pretending LLMs don’t exist.

Instead:

  • Teach how to critique AI-generated code.
  • Show how to design prompts that reflect design intent.
  • Assign projects where students must debug or refactor AI code.
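As an illustration of that last exercise, here is a hypothetical "AI-generated" snippet of the kind students could be asked to debug. It reads cleanly and passes a casual review, yet it shares one list across calls (Python's mutable default argument):

```python
# Hypothetical classroom exercise: plausible-looking generated code
# with a classic subtle bug (mutable default argument).

def collect_tags_buggy(tag, tags=[]):
    tags.append(tag)       # bug: the default list persists across calls
    return tags

def collect_tags_fixed(tag, tags=None):
    if tags is None:
        tags = []          # fresh list per call unless one is supplied
    tags.append(tag)
    return tags
```

The buggy version returns ["a"] on the first call and ["a", "b"] on the second, even when the caller expected independent lists. It is exactly the kind of defect that survives "it works" testing, and spotting it is a skill no amount of prompting teaches.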

If we don’t teach this now, we’ll end up with a generation of developers who can prompt, but can’t reason.


Final thought: This is not about tools — it’s about trust

The message is clear: vibe coding isn’t just a new tool — it’s a shift in how we think, communicate, and construct software systems.

We’re no longer just typing code. We’re shaping prompts, curating outputs, and navigating abstraction at a higher level. But that shift comes with risk — and responsibility.

Too many teams treat AI-assisted development as a productivity boost, not a transformation in engineering practice. And that’s where we need to raise the bar.

So, what’s your next move?

If you’re a software engineer or team contributor: Don’t just “go with the vibe” — slow down and question what’s behind the generated code. Get comfortable reading diffs, challenging AI output, and thinking critically about quality and intent. Learn how to design effective prompts — but also how to detect when the AI got it wrong. In the era of automated code, judgment is your most valuable skill.

If you’re a team lead, architect, or tech lead: Treat AI coding as a process that requires oversight, not just trust. Build team rituals for reviewing AI-generated code. Encourage knowledge sharing around what prompts work — and what traps your team has encountered. Define when vibe coding is acceptable (prototypes? boilerplate?) and when it must be replaced by rigor. Great teams won’t just use AI — they’ll audit it.

If you’re a CTO, VP of Engineering, or organizational leader: Shift the conversation from tools to capability. Don’t just track AI adoption — invest in your teams’ ability to guide, verify, and adapt to AI-generated software. Create policies around security, code ownership, and review for AI-written contributions. Your future resilience depends not on how fast your team codes — but how well it collaborates with non-human contributors.

The future of software won’t be written line by line — it will be shaped through intent, abstraction, and critical interpretation. The teams that thrive won’t just move fast — they’ll think deeply, verify continuously, and never outsource responsibility to the vibe.

That’s where I can help.

I work with organizations to make sense of AI’s role in software engineering — through workshops, strategy development, and research-backed guidance on tooling, team practices, and ethical adoption.

👉 Learn how to navigate the future of software: danielrusso.org/evidence-based-organizational-change

How is your team approaching AI-generated code?

💬 Let’s exchange notes in the comments.

#SoftwareEngineering #VibeCoding #AIinDev #TeamPractices #EngineeringLeadership #PromptEngineering #AgileTech #FutureOfWork

