Tracking “escaped defects”?

Someone asked me today whether I had any experience with tracking “escaped” defects as a measure of quality.

Here are my thoughts. They aren’t meant to cover the topic exhaustively, but I think they cover some very important points.

What’s the simplest thing you could do?

If what you are trying to do is get an idea of how many “defects” make it to live when they could’ve (or should’ve) been spotted earlier, what’s the simplest thing you could do? Could counting them as an absolute number, and tracking that number over time, be enough?
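As a minimal sketch of that “simplest thing”, you could keep a record per defect of which release it shipped in and whether it escaped to live, then tally the counts. The `Defect` record and the release names below are hypothetical examples, not a prescribed schema.

```python
# A minimal sketch: tally escaped defects per release from a flat list of records.
# The data model and values are illustrative assumptions, not a real tracker schema.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Defect:
    release: str   # the release the defect shipped in
    escaped: bool  # True if it was only found in production


defects = [
    Defect("1.0", True),
    Defect("1.0", False),  # caught before release, so not "escaped"
    Defect("1.1", True),
    Defect("1.1", True),
]

# The raw metric: how many defects escaped, per release.
escaped_per_release = Counter(d.release for d in defects if d.escaped)
print(dict(escaped_per_release))  # {'1.0': 1, '1.1': 2}
```

Even something this crude gives the team a trend line to ask questions about, which is arguably the whole point.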

Let’s assume that you do find a way of measuring them. Would that measure be valuable enough to infer something about the quality of your systems or the quality of your test approach? How could you use it to shift left?

If you are using test-driven practices such as BDD, ATDD or Specification by Example, what does it tell you about how well you are using them?

A difficult point, though, would be to decide what constitutes a released defect. It can be subjective or arguable. Is it something that you had specified acceptance tests for that just slipped through? Or is it something for which you did not think of an acceptance test? Is it something the PO thought was implied or obvious, but it wasn’t for the team? Is it a regression issue? Can you check if it existed in the previous version of the software? And so on. If you do decide to use any metric around this, I think it should be up to the team to define what a “live defect” is and isn’t.
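If the team does agree on a definition, it can help to make the agreed categories explicit, so each escaped defect is tagged rather than just counted. The categories below simply mirror the questions in the paragraph above; they are an illustrative assumption, and each team would agree its own list.

```python
# A hedged sketch: explicit, team-agreed categories for escaped defects.
# The category names mirror the questions in the text; a real team would
# define (and rename) these for itself.
from enum import Enum, auto


class EscapeCategory(Enum):
    SLIPPED_PAST_ACCEPTANCE_TEST = auto()  # a test existed, but the defect escaped anyway
    NO_ACCEPTANCE_TEST = auto()            # nobody thought of a test for this behaviour
    IMPLICIT_EXPECTATION = auto()          # the PO assumed it was obvious; the team didn't
    REGRESSION = auto()                    # present or reintroduced vs. the previous version


# Tagging escaped defects with a category turns a raw count into
# something the team can actually discuss in a retrospective.
defect_tags = [EscapeCategory.NO_ACCEPTANCE_TEST, EscapeCategory.REGRESSION]
```

The design point is that the classification, like the metric itself, belongs to the team: the enum is just a way of writing the agreed definition down.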

Something to watch out for

Who would this metric be for? If it is wanted and measured by the team, for the team, it can be an excellent way to trigger questions that can help improve the way they work. If it is an external metric, then I would be very cautious (or even worried).

Focus on the right thing

A more general consideration is that while some metrics around quality can offer (or appear to offer) insights, the real game changer is to continuously look into how you can shift left, how you can get tests to document your stories and drive development, how these tests can continuously improve. And iterate. If a team does this well, quality will improve.

What are your thoughts?

