Over the last 40 years my teams and I have evolved a 10-step software quality strategy. It's worked *really* well for us - I thought I'd throw it out there for your thoughts.

1. Ensure the entire Scrum team *refines the backlog*. This helps ensure everyone is on the same page. Few things are more costly than the team building the wrong thing and throwing away code.
2. Write *test plans* using a behavior-driven development (BDD) style (given…when…then) or similar. This helps "pre-visualize" the story in an as-built state. "Painting done" helps everyone understand what to build by describing what the behavior should be once it's implemented. (Pro tip: have the product owner review them to make sure the vision matches!)
3. Write both front-end and back-end *unit tests*, ensuring all logic, calculations, and flows are covered. This is the gift that keeps on giving because all unit tests are executed with every code commit (or at least... they should be), letting you quickly detect when a recent change breaks something.
4. Perform *peer code reviews* and address the issues they raise. Some studies have shown defect reductions of 80 percent or more from code inspections. A nice side benefit? Teams move faster because they're saddled with fewer bugs.
5. *Manually test* each story as it's completed during the sprint and fix any bugs. Using the test plans from step 2, verify that the as-built behavior matches the vision.
6. Have the *product owner* use the functionality for the story and make adjustments as necessary. Kicking the tires ensures the implemented story matches their vision.
7. Implement *code-driven integration testing*. This ensures that the front end can talk to the back end, the back end can talk to the database and other services, and everything is stitched together as it should be. (Pro tip: do NOT retest the scenarios already covered by front-end and back-end unit tests.)
8. Implement *UI-driven smoke testing*. This should be the LIGHTEST layer of testing, because UI-driven tests are costly to build, costly to maintain, and extremely brittle. Focus tightly on UI-specific functionality; everything else should be exercised via unit tests or code-driven integration tests.
9. *Demonstrate* "done done" stories to stakeholders every 2 weeks to get their feedback. They'll often pick up on subtleties everyone else missed.
10. Have stakeholders (or better yet, customers) *use the product increment* in a demo environment, refreshed every 2 weeks after the review. Only when the product is used do you really know it's working.

What would you add (or change)?
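Step 2's given…when…then plans can double as executable checks once the story is built. A minimal sketch in plain Python (the discount rule and `apply_discount` function are hypothetical examples, not from the post):

```python
# A BDD-style test plan ("given...when...then") written as executable
# unit tests. The discount rule below is an invented example domain.

def apply_discount(subtotal: float) -> float:
    """Orders over 100 get 10% off; everything else is unchanged."""
    return subtotal * 0.9 if subtotal > 100 else subtotal

def test_order_over_100_gets_ten_percent_off():
    # Given an order whose subtotal exceeds 100
    subtotal = 120.0
    # When the discount rule is applied
    total = apply_discount(subtotal)
    # Then the customer pays 10% less
    assert total == 108.0

def test_order_at_or_under_100_pays_full_price():
    # Given an order at the threshold
    # When the discount rule is applied
    # Then nothing changes
    assert apply_discount(100.0) == 100.0

if __name__ == "__main__":
    test_order_over_100_gets_ten_percent_off()
    test_order_at_or_under_100_pays_full_price()
    print("all checks passed")
```

Because the comments mirror the test plan's given/when/then lines, a product owner can review the scenario names and comments without reading the code.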
How to Build a Software Quality Engineering Strategy
Summary
A software quality engineering strategy is a plan that helps teams build software that works reliably and meets user needs by making quality a focus at every step, rather than something tested only at the end. This approach combines careful planning, good design, continuous testing, and teamwork to prevent bugs and deliver trustworthy products.
- Build quality in: Encourage everyone on the team to share responsibility for quality from the start by using clear requirements, thoughtful design, and a mix of manual and automated tests throughout development.
- Test early and often: Start testing as soon as development begins, use a variety of techniques to catch potential problems early, and keep reviewing your testing approach to make sure it stays useful and trustworthy.
- Focus on real users: Regularly validate software in real-world scenarios, involve stakeholders in reviews, and use feedback to guide improvements so that the product truly meets user expectations.
-
As a client project manager, I consistently found that offshore software development teams from major providers like Infosys, Accenture, IBM, and others delivered software that failed a third of our UAT tests after the provider's independent, dedicated QA teams had passed it. And when we got a fix back, it failed at the same rate, meaning some features cycled through Dev/QA/UAT ten times before they worked.

I got to know some of the onshore technical leaders from these companies well enough for them to tell me confidentially that we were getting such poor quality because the offshore teams were full of junior developers who didn't know what they were doing and didn't use any modern software engineering practices like test-driven development. And their dedicated QA teams couldn't prevent these quality issues because they were full of junior testers who didn't know what they were doing, didn't automate tests, and were ordered to test and pass everything quickly to avoid falling behind schedule. So poor development and QA practices were built into the system development process, and independent QA teams didn't fix it.

Independent, dedicated QA teams are an outdated and costly approach to quality. It's like a car factory that consistently produces defect-ridden vehicles only to disassemble and fix them later. Instead of testing and fixing features at the end, we should build quality into the process from the start. Modern engineering teams do this by working in cross-functional teams that use test-driven development to define testable requirements and continuously review, test, and integrate their work. This allows them to catch and address issues early, resulting in faster, more efficient, and higher-quality development.

In modern engineering teams, QA specialists are quality champions. Their expertise strengthens the team’s ability to build robust systems, ensuring quality is integral to how the product is built from the outset.
The old model, where testing is done after development, belongs in the past. Today, quality is everyone’s responsibility—not through role dilution but through shared accountability, collaboration, and modern engineering practices.
-
🚨 “It worked on my machine!” … until it didn’t. Every engineer has lived this nightmare. That’s why real quality starts long before a release.

💡 Are you serious about “quality”? As an engineering lead, I’ve learned that quality is not a checklist - it is a mindset you bring to every stage of development. Here’s what great developers focus on:

1️⃣ Requirement clarity - Know what you are building and why.
2️⃣ Strong design principles - Good software and code design go a long way.
3️⃣ Test thinking early - Map positive and negative cases before writing code.
4️⃣ Code for the future - Scalable, readable, maintainable.
5️⃣ Unit tests - Your first safety net for critical logic.
6️⃣ Integration tests - Catch those sneaky edge cases. These have saved my team from major outages.
7️⃣ Manual testing - Nothing beats a human eye.
8️⃣ Team testing - A fresh perspective always finds what you missed.

And yet… 🔁 Murphy’s Law still applies: Anything that can break, will break. That’s why quality is not a phase at the end - it is a culture from day one. As engineering leaders, our job is to build teams that own quality - where every developer feels responsible for shipping code that lasts. Just last week, my team chose a long-term solution over a quick fix, and the payoff will outlive any sprint deadline.

How do you bake quality into your development process? Comment below 👇

#EngineeringLeadership #CodeQuality #SoftwareEngineering #DevEx #QualityCulture
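Points 3️⃣ and 5️⃣ in miniature: positive and negative cases mapped out as unit tests for a small piece of critical logic. The `parse_percentage` function is a hypothetical example; the point is that the bad inputs are enumerated before (or alongside) the happy path:

```python
# Positive and negative cases for a critical calculation, expressed as
# unit tests. parse_percentage is an invented example function.

def parse_percentage(text: str) -> float:
    """Convert strings like '42%' or '42' to a float in [0, 100]."""
    value = float(text.rstrip("%").strip())
    if not 0 <= value <= 100:
        raise ValueError(f"percentage out of range: {value}")
    return value

def test_positive_cases():
    assert parse_percentage("42%") == 42.0
    assert parse_percentage(" 7 ") == 7.0
    assert parse_percentage("0") == 0.0

def test_negative_cases():
    # Every input here must be rejected, not silently accepted.
    for bad in ["101", "-1", "abc", ""]:
        try:
            parse_percentage(bad)
        except ValueError:
            continue
        raise AssertionError(f"expected ValueError for {bad!r}")

if __name__ == "__main__":
    test_positive_cases()
    test_negative_cases()
    print("all checks passed")
```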
-
Don’t Focus Too Much On Writing More Tests Too Soon

📌 Prioritize Quality over Quantity: Make sure the tests you have (and this can even be just a single test) are useful, well-written, and trustworthy. Make them part of your build pipeline. Make sure you know who needs to act when a test fails, and who should write the next test.
📌 Test Coverage Analysis: Regularly assess the coverage of your tests to ensure they adequately exercise all parts of the codebase. Tools like code coverage analysis can help identify areas where additional testing is needed.
📌 Code Reviews for Tests: Just like code changes, tests should undergo thorough code reviews to ensure their quality and effectiveness. This helps catch any issues or oversights in the testing logic before they are integrated into the codebase.
📌 Parameterized and Data-Driven Tests: Incorporate parameterized and data-driven testing techniques to increase the versatility and comprehensiveness of your tests. This allows you to test a wider range of scenarios with minimal additional effort.
📌 Test Stability Monitoring: Monitor the stability of your tests over time to detect any flakiness or reliability issues. Continuous monitoring can help identify and address any recurring problems, ensuring the ongoing trustworthiness of your test suite.
📌 Test Environment Isolation: Ensure that tests are run in isolated environments to minimize interference from external factors. This helps maintain consistency and reliability in test results, regardless of changes in the development or deployment environment.
📌 Test Result Reporting: Implement robust reporting mechanisms for test results, including detailed logs and notifications. This enables quick identification and resolution of any failures, improving the responsiveness and reliability of the testing process.
📌 Regression Testing: Integrate regression testing into your workflow to detect unintended side effects of code changes. Automated regression tests help ensure that existing functionality remains intact as the codebase evolves, enhancing overall trust in the system.
📌 Periodic Review and Refinement: Regularly review and refine your testing strategy based on feedback and lessons learned from previous testing cycles. This iterative approach helps continually improve the effectiveness and trustworthiness of your testing process.
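The parameterized/data-driven point above can be sketched without any framework: one test body driven by a table of cases, so adding a scenario costs one line. The rounding rule under test (`round_half_away`) is a hypothetical example; with pytest you would express the same table via `@pytest.mark.parametrize`:

```python
# A data-driven test: one test body, many cases from a table.
# round_half_away (round halves away from zero) is an invented example.
import math

def round_half_away(x: float) -> int:
    """Round to the nearest integer, with halves rounding away from zero."""
    return int(math.floor(x + 0.5)) if x >= 0 else int(math.ceil(x - 0.5))

# Each row is (input, expected). Adding a scenario is one line, not a new test.
CASES = [
    (2.5, 3),
    (2.4, 2),
    (-2.5, -3),
    (-2.4, -2),
    (0.0, 0),
]

def test_round_half_away():
    for x, expected in CASES:
        got = round_half_away(x)
        assert got == expected, f"round_half_away({x}) = {got}, want {expected}"

if __name__ == "__main__":
    test_round_half_away()
    print(f"{len(CASES)} cases passed")
```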
-
Testing isn’t about proving what works—it’s about uncovering what breaks before the user does. Strong QA practices go beyond checklists. They anticipate risks, challenge assumptions, and protect user trust.

> Test like a real user, in real conditions
> Start testing early—shift left to catch issues sooner
> Automate repetitive and regression checks to save time and reduce human error
> Prioritize high‑risk, high‑impact areas where failures matter most
> Keep test cases clear, concise, and easy to maintain
> Validate across different environments, browsers, and devices
> Use realistic, imperfect data to simulate real‑world scenarios
> Recheck fixes to prevent regressions from creeping back in
> Explore creatively to uncover unexpected issues
> Push the system’s limits to reveal hidden weaknesses

Quality isn’t just about passing tests—it’s about building confidence in the product. When QA is treated as a strategic partner, teams deliver not only faster but smarter, with fewer surprises in production.

#QAEngineering #SoftwareTesting #QualityMatters #TechCulture #Automation
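"Use realistic, imperfect data" in practice means feeding a function the messy values production actually sees, not just tidy fixtures. A small sketch, where `normalize_email` and its inputs are hypothetical examples:

```python
# Regression-style check against realistic, imperfect input data.
# normalize_email is an invented example function.

def normalize_email(raw: str) -> str:
    """Trim surrounding whitespace and lowercase the address."""
    return raw.strip().lower()

# Messy-but-real inputs mapped to the value we expect to store.
MESSY = {
    "Alice@Example.COM": "alice@example.com",
    "  bob@example.com\n": "bob@example.com",   # trailing newline from a form
    "\tCAROL@EXAMPLE.COM ": "carol@example.com",  # pasted from a spreadsheet
}

def test_normalize_email_handles_messy_input():
    for raw, want in MESSY.items():
        assert normalize_email(raw) == want, f"failed on {raw!r}"

if __name__ == "__main__":
    test_normalize_email_handles_messy_input()
    print("ok")
```

If a later "quick fix" breaks whitespace handling, this table catches the regression before a user does.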
-
If your requirements live outside the tools your teams use to design and validate, you’re managing blind. That’s when change sneaks in, spreadsheets drift, and decisions get made on stale context. I’ve watched capable teams burn weeks on late change notices and heroic integrations because the requirements weren’t connected to the work.

The cost of ignoring this is well documented. A landmark study by Texas Instruments found runaway costs 7 out of 10 times when teams failed to keep requirements current. Projects that relied on documents or databases alone still saw high runaway rates. Add that test and integration often consume 50 to 60 percent of the lifecycle, and you can see why linking each requirement to test cases and design items is non‑negotiable. Another lesson from that research: nearly all of your cost is locked in by the time you hit development. Decisions made early, with live requirements, decide whether your program will be late or lean.

Here’s the practice that consistently stabilizes complex, multi‑site programs. Treat requirements as a living system. Map each requirement to a specific design item, a test case, and a program constraint. Make the system web‑accessible and usable from common desktop tools so every entitled person can read, edit, and trace without a learning curve. Use notifications to flag parts and schedules at risk when a requirement changes.

The payoff for energy and utilities teams is concrete. Faster change assessment because every requirement has a home. Shorter test cycles because reruns automatically verify compliance. Better supplier conversations because requirements arrive early enough to adjust parts or propose alternatives. Most important, quality is designed in upstream, not inspected downstream.
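The mapping described above (requirement → design item → test case) can be sketched as a tiny data model. Every identifier and field name here is illustrative, not from any real requirements tool:

```python
# Toy sketch of requirements traceability: each requirement links to the
# design items and test cases it constrains, so a change can immediately
# report what is at risk. All IDs and names are invented examples.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    design_items: list = field(default_factory=list)
    test_cases: list = field(default_factory=list)

def impact_of_change(req: Requirement) -> list:
    """Everything that must be re-reviewed or re-run when `req` changes."""
    return req.design_items + req.test_cases

reqs = {
    "REQ-101": Requirement(
        "REQ-101",
        "Meter readings are stored within 5 seconds of receipt",
        design_items=["DES-12 ingestion pipeline"],
        test_cases=["TC-31 latency under load", "TC-32 ingest retry"],
    ),
}

# A requirement changed; list what it touches so nothing is verified blind.
for item in impact_of_change(reqs["REQ-101"]):
    print(item)
```

A real system adds versioning, notifications, and bidirectional links, but the core payoff is this query: "what do we re-verify when REQ-101 changes?"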
-
Some organisations are drowning in technical debt.

- A "5-minute fix" turns into 3 days of effort.
- Adding new features feels like playing with a house of cards.
- That "quick fix" from 2 years ago? Now it runs your entire business.
- Every team meeting is a battle: Feature vs. Quality.
- Your team is getting slower by the month.
- A tiny change takes weeks to test.

It doesn't have to be like this. And 𝗶𝘁'𝘀 𝗰𝗲𝗿𝘁𝗮𝗶𝗻𝗹𝘆 𝗻𝗼𝘁 𝘁𝗵𝗲 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿𝘀' 𝗳𝗮𝘂𝗹𝘁! Usually, it's the result of 𝗺𝗶𝘀𝗽𝗹𝗮𝗰𝗲𝗱 𝗽𝗿𝗶𝗼𝗿𝗶𝘁𝗶𝗲𝘀 and a misguided organizational development culture.

Some root causes:
- "Just make it work for the demo"
- Fixing technical debt rarely gets priority
- Immediate deadlines trump long-term code health
- Leaders are rewarded for short-term results
- Sales teams promising features without consulting engineering
- Career advancement based on visible new features
- Business leaders don't trust engineering time estimates
- Engineers don't trust management's promises to "fix it later"

Here are 5 clear, actionable recommendations:
1. Make system health a product feature, not just a technical problem.
2. Measure and demonstrate how technical debt is slowing down your business.
3. Include clean-up time in all feature estimates - no exceptions.
4. Never bypass quality checks, even under pressure.
5. Reward teams for preventing problems, not just fixing them.

When technical health becomes part of your product strategy, your teams move faster, your systems stay reliable, and your business grows stronger. Best of all? Your developers come to work
- excited to build great things instead of dreading the next fire.
- proud of their work instead of apologizing for it.
- energized to create instead of stressed about breaking things.

When did technical debt hit your organization the hardest? Share your story below - whether it's a horror story or a success story. 👇

P.S. In Sophie's and my forthcoming book, we share our experiences of how to develop good software sustainably.
#devops #technology #qualit