AI Won't Replace Engineering Rigor, Only Expose Its Absence

Saanya Ojha

Everyone talks about how AI will change software development. We need to talk about what won’t change. If anything, AI is forcing a return to first principles, increasing the importance of the SDLC:

➰ If code is generated, specs need to be precise. Bad instructions = scalable garbage.
➰ If output is probabilistic, reviews need to focus on intent, not syntax.
➰ If no one wrote every line, testing becomes your only source of truth.

AI is less a replacement for engineering rigor than an efficient machine for punishing its absence. The abstraction is rising, but the surface area of complexity is exploding underneath: more services, more dependencies, more non-deterministic behavior. Someone still needs to reason about architecture, failure modes, and tradeoffs.

This isn’t a critique of the technology. It’s excellent and improving quickly. But using it well still requires systems thinking and technical judgment. Generating code is not the same as building software, in the same way producing words is not the same as making a legal argument. Engineering starts after the code exists: validating it, integrating it, deploying it, and ensuring it doesn’t break something downstream at 2AM.

Today, everyone can build - but few can build systems that last. That distinction is about to get expensive. Engineering orgs run by non-technical leadership are especially likely to learn this lesson the hard way. Expect a wave of premature cost-cutting followed by rehiring cycles once the Sev 1s start stacking. If you don't understand the distinction between “more code” and “more reliability,” AI will eventually explain it to you in production.

This dynamic is amplified by the internal politics of large orgs. Scope gets claimed by overpromising; timelines get pulled forward to win resourcing. AI pours fuel on this - making it easier to show rapid early progress and paper over weak foundations, at least temporarily. The bill still arrives.
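The “testing becomes your only source of truth” point can be made concrete: if no human wrote the implementation, spec-level tests are what pin the intent. A minimal hypothetical sketch (the function and its invariants are invented for illustration, not from the post):

```python
# Spec-level tests pin the *intent* (invariants) of a function,
# regardless of how the generated implementation happens to be written.
# apply_discount and its rules are hypothetical examples.

def apply_discount(price: float, pct: float) -> float:
    # Imagine this body was AI-generated; the tests below are the contract.
    return round(price * (1 - pct / 100), 2)

def test_discount_never_negative():
    # Intent: no combination of inputs may produce a negative price.
    for price in (0.0, 9.99, 100.0):
        for pct in (0, 50, 100):
            assert apply_discount(price, pct) >= 0.0

def test_full_discount_is_free():
    # Intent: a 100% discount always yields exactly zero.
    assert apply_discount(80.0, 100) == 0.0
```

Tests like these survive a regenerated implementation; a line-by-line review of probabilistic output does not.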
On the ground, engineers are already seeing the pattern. AI works best in existing codebases with strong patterns - it learns structure and extends prior decisions. But when used to “vibe code” from scratch, it often produces brittle systems: inconsistent abstractions, awkward interfaces, and code that looks fine until it’s asked to scale.

So teams have two choices:
1️⃣ Let the system emerge through prompting → fast, messy, brittle
2️⃣ Define architecture and constraints upfront → slower, but durable

Most teams are choosing (1). Not because it’s better - but because it’s incentivized. When output is measured in PRs and velocity, there’s little reward for designing clean systems or thinking ahead. Complexity gets deferred and paid back later with interest.

AI is making coding easier, but it won't make engineering less ‘technical’. Eventually, every builder will be forced to internalize the tenets of the SDLC: version control, testing discipline, structured iteration.

I think this is true across domains. Maybe we’re just starting to experience it in software first. High-quality writing is no different, imho. It needs an outline, revisions to the outline, then drafts and revisions to the drafts… it’s like dev. It has a process. To get great outputs from LLMs and reasoning models, we need to build the first principles of the domain in. That’s the “harness” that makes it work.

We run 8 AI agents off one codebase with a shared coordination file to stop them from breaking each other's work. The agents that work are the ones with tight constraints, clear memory, and rules written from past mistakes. The ones we let "vibe code" from scratch caused more problems than they solved. Option 2 every time. The speed from option 1 is a loan, not a gift.
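The commenter doesn't describe their coordination file, but a minimal sketch of the idea - a shared claims ledger that each agent must check before touching a file - might look like this (all names and the JSON format are hypothetical):

```python
import json
from pathlib import Path

# Shared ledger mapping file path -> owning agent. Hypothetical layout.
COORD = Path("coordination.json")

def claim(agent: str, file_path: str) -> bool:
    """Claim a file for one agent; refuse if another agent already owns it."""
    claims = json.loads(COORD.read_text()) if COORD.exists() else {}
    owner = claims.get(file_path)
    if owner not in (None, agent):
        return False  # another agent is editing this file; back off
    claims[file_path] = agent
    COORD.write_text(json.dumps(claims, indent=2))
    return True

def release(agent: str, file_path: str) -> None:
    """Release a claim so other agents can pick the file up."""
    claims = json.loads(COORD.read_text()) if COORD.exists() else {}
    if claims.get(file_path) == agent:
        del claims[file_path]
        COORD.write_text(json.dumps(claims, indent=2))
```

A real setup would need atomic writes or OS-level file locking to survive truly concurrent agents; this sketch only shows the claim/release protocol.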

Had this exact conversation with a developer on my team. He kept asking for more and more detailed specs. I pushed back because we had already aligned at the team level on what needed to be built, and I said something he did not love: "If I understand the functionality well enough and Claude Code understands the codebase well enough, then somebody in between the two is superfluous." The incentive problem runs in both directions. Some engineers are becoming the bottleneck they used to warn against.

Most software today was architected and designed by humans to automate human-in-the-loop business processes. Humans are like the general-purpose computers that make bureaucratic organisation work. Hence, the typical software design processes. AI-native businesses will be much simpler to design and build once humans are out of the loop.


Exactly... the real bottleneck is no longer generating code; it’s turning AI-built apps into secure, testable, deployable systems with clear architecture, cost control, and operational guardrails. The winners won’t be the teams that ship the most prompts, but the ones that can reliably graduate prototypes into production.

Strong take. AI seems to be shifting effort from writing code to defining systems clearly. Feels like the real bottleneck is moving from execution to judgment, especially around architecture and failure modes. Curious, do you think teams will adapt their incentives, or will most learn this only after systems start breaking at scale?


Saanya Ojha Completely agree, but one piece you're missing IMO: it's not just "define architecture and constraints upfront" - you also have to enforce them. Upfront prompting doesn't equal certainty of outcome. Instructions to your agent (retry logic, timeouts, rate limiting, data encryption standards, etc.) also need to be enforced across the codebase and as checks in CI.
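One way to make such instructions enforceable, as the comment suggests, is a CI lint that rejects code violating the constraint. A hypothetical sketch for the "timeouts" example - a script that fails the build if any `requests` call omits an explicit `timeout` (the rule and file layout are illustrative, not from the post):

```python
import re
from pathlib import Path

# Flag any requests.get/post/... call that does not pass an explicit timeout.
# Regex-based, so it only catches simple single-line calls; a real gate
# might use an AST-based linter instead.
CALL = re.compile(r"requests\.(get|post|put|delete|patch)\(([^)]*)\)")

def violations(source: str) -> list[str]:
    """Return the offending call snippets found in one file's source."""
    return [m.group(0) for m in CALL.finditer(source)
            if "timeout" not in m.group(2)]

def main() -> int:
    bad = []
    for path in Path(".").rglob("*.py"):
        bad += [f"{path}: {v}" for v in violations(path.read_text())]
    for line in bad:
        print(line)
    return 1 if bad else 0  # nonzero exit fails the CI job
```

Wired into CI as `sys.exit(main())`, the constraint stops being a prompt instruction and becomes a gate no generated code can slip past.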

