From the course: Business Analysis: Essential Tools and Techniques
Tracing requirements through testing
- [Instructor] With the initial requirements for upgrading Landon Hotel's website now neatly organized in a spreadsheet, it's time to leverage this hard work to help with the delivery of these needs. I'm going to expand the table here to include a few more columns, and I'll add in our verification items. We want Verified and Verified By, so in column F enter Verified, and in column G enter Verified By. I'll widen these to give them a little room. Then we want Validated and Validated By, so in column H enter Validated, and in column I, Validated By, again widening them so they're easier to read. These are my reminder columns to review all the captured requirements with my stakeholders and confirm we have everything we need, and only what we need. A review session can really be worth its time. Next we want to add the field that helps our stakeholders define their expectations. This is called the acceptance criteria, and I'll enter that in the next column and expand it so it's easier to read. Here's where traceability really gets cool. For each verified and validated requirement, you have a use case, so you simply ask the stakeholders what they're willing to accept that supports their use case and, therefore, the traced requirements. I fill in literally what my stakeholders are willing to accept in each requirement's acceptance criteria. So going all the way back to our project objectives, our whys, then our whats with our deliverables, requirements, and use cases, traced all the way down to acceptance criteria, we can turn to the how. How would you know you've delivered the requirement? Well, what do you test? So we literally add a column for Test Case, which I'll expand out a little, and then I always put a Test Case Description in the next column.
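If it helps to see the same structure outside a spreadsheet, here is a minimal sketch of the matrix as rows of named columns in Python. The column names follow the video; the sample requirement and the people named in it are hypothetical placeholders, not values from the course files.

```python
# Columns of the traceability matrix built so far in the video.
COLUMNS = [
    "Requirement", "Verified", "Verified By", "Validated", "Validated By",
    "Acceptance Criteria", "Test Case", "Test Case Description",
]

# One hypothetical row for the smartphone requirement.
matrix = [
    {
        "Requirement": "Website works on smartphones",
        "Verified": "Yes",
        "Verified By": "Stakeholder review session",
        "Validated": "Yes",
        "Validated By": "Project sponsor",
        "Acceptance Criteria": "All website features usable on a smartphone",
        "Test Case": "Access via smartphone",
        "Test Case Description": "Website is accessible from smartphone "
                                 "and displays all website features",
    },
]

# Every row should carry a value for every traced column,
# just as every spreadsheet row keeps all its columns filled in.
for row in matrix:
    assert all(col in row for col in COLUMNS)
```

Keeping each row complete is what makes the trace work: any test case can be followed back through its acceptance criteria to the verified, validated requirement it supports.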
Again, expanding to fit, and I'll even wrap the text for easier reading. This way I can start capturing the details of what we're testing, so we know we truly delivered the value-adding features. Say, for the smartphone requirement, I'm going to break the acceptance criteria up into smaller, more manageable tests. First, let's make sure the smartphone can access the website. So I'll add a test case, Access via Smartphone, and I'll put a description in there: "Website is accessible from smartphone and displays all website features." I know now what I'll be testing, but I'll add another row, because there's more than one test case per requirement, and that's perfectly normal. I'll add another row and make sure I copy down all the traced requirement fields. Here I'll add another test case, Subscribe to Sales Promotions via Smartphone, and then enter a description of what I'm testing and validating: on a mobile device, we want to be able to select Subscribe to Sales Promotions, enter our email address, select Subscribe, and then see a confirmation screen. We're getting a lot of details here, but the traceability matrix is what helps keep you focused. As you're adding these items, we want to trace them all the way through testing, so next you add in an Expected Outcomes column; I'll enter Expected Outcomes in column M. In this column you enter what you expect to happen. Say, for our smartphone test case, I'll enter the expected outcome that on a smartphone, the navigation and menu, subscription features, reservations, and amenity views all display. I'll do this for every single test case, as this is where you define the correct functionality that we want our testers to verify. We then add Actual Outcomes in the next column, and expand it to fit.
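The "add another row and copy down the traced fields" step can be sketched in code as well. This is a hypothetical helper, not part of the course files: it duplicates an existing row so the trace back to the requirement survives, then swaps in the new test case.

```python
import copy

def add_test_case(matrix, base_row, test_case, description):
    """Append a new test-case row, copying down the traced requirement fields."""
    row = copy.deepcopy(base_row)      # the trace to the requirement is preserved
    row["Test Case"] = test_case
    row["Test Case Description"] = description
    row["Expected Outcome"] = ""       # filled in later, per the next step
    matrix.append(row)
    return row

# Hypothetical existing row for the smartphone requirement.
base = {
    "Requirement": "Website works on smartphones",
    "Acceptance Criteria": "All website features usable on a smartphone",
    "Test Case": "Access via smartphone",
    "Test Case Description": "Website is accessible from smartphone",
}
matrix = [base]
add_test_case(
    matrix, base,
    "Subscribe to sales promotions via smartphone",
    "User can select Subscribe to Sales Promotions, enter an email "
    "address, select Subscribe, and see a confirmation screen",
)
```

Both rows now share the same Requirement and Acceptance Criteria values, which is exactly what copying the traced columns down in the spreadsheet accomplishes.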
This is where testers can capture what actually happened during testing, which means I'll need one more column. So now I'm going to add in a Status column. This allows me to track the status of each test case. As for the results: if the expected outcome is exactly the same as the actual outcome, then we know the test case has passed, and I would simply enter the status here as Pass, centered for easy reading. If the expected and actual outcomes do not match, then the test case is a Fail, and you'd want to do some root cause analysis whenever test cases fail. This Status column is really helpful for managing your entire testing effort, as you can add in additional statuses such as Not Started and In Progress. So now you're not just managing requirements, you're tracking the entire delivery of those requirements by keeping them traced. And the coolest thing: if you've tested everything and it all passes, then you know you've delivered all the requirements. Why? Because you traced them from start to finish.