Monday, 21 February 2011

Measuring Testing

I saw a couple of tweets by @Lynn_Mckee recently on the metrics that are used in testing.

There are many great papers on #metrics. Doug Hoffman's "Darker Side of Metrics" provides insight on behavior. http://bit.ly/gKPHcj #testing

Ack! So many more that are painful... Scary to read recent papers citing same bad premises as papers from 10 - 15 yrs ago. #testing #metrics

And it made me think about how we measure testing.

This article is not going to say

'This is how you should measure testing'

or

offer any 'best practice' ways of measuring.

My concern with any of the ways in which we measure is that it is done without context or any connection to the question you wish to have answered by the numbers. It becomes a set of numbers devoid of any information about their 'real' meaning. There are many and varied debates within the software testing field about what should and should not be measured. My take on all of this is:

Can I provide useful and meaningful information with the metrics I track?

I still measure the number of test cases that pass and fail, and the number of defects raised and fixed.

Is this so wrong?

If I presented these numbers on their own, without any supporting evidence or a story about the state of testing, then yes, it would be wrong; it could be very dangerous.

I view the metrics gathered during testing as an indication that something might be right or wrong, working or not working. I do not know this from the metrics alone; it comes from talking to the team, debriefing and discussing issues.

I capture metrics on requirement coverage, focus area coverage, percentage of time spent testing, defect reporting and system setup. So I have a lot of numbers to work with, which on their own can be misleading, confusing and easily misinterpreted. If I investigate the figures in detail and look for patterns, I notice missing requirements, conflicting requirements and whatever is stopping me from executing tests.
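To make that concrete, here is a minimal sketch of this kind of bookkeeping. It is illustrative only, not a tool I am recommending; the test records, requirement IDs and field names are all made up. The raw counts answer 'how many passed?', but it is the gaps and blocked tests underneath them that give you something to talk about with the team.

    from collections import Counter

    # Hypothetical test records: each links a test case to a
    # requirement and records its latest outcome.
    results = [
        {"test": "TC-01", "requirement": "REQ-1", "outcome": "pass"},
        {"test": "TC-02", "requirement": "REQ-1", "outcome": "fail"},
        {"test": "TC-03", "requirement": "REQ-3", "outcome": "blocked"},
    ]
    all_requirements = {"REQ-1", "REQ-2", "REQ-3"}

    # The raw counts: the numbers people usually ask for.
    outcomes = Counter(r["outcome"] for r in results)
    print("Outcomes:", dict(outcomes))

    # The patterns behind the counts: where the story starts.
    covered = {r["requirement"] for r in results}
    print("Requirements with no tests yet:", sorted(all_requirements - covered))
    print("Blocked tests (something is stopping testing):",
          [r["test"] for r in results if r["outcome"] == "blocked"])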

So what is this brief article saying?

Within the software testing community I see that we get hung up on metrics and on how we measure testing, and I feel we need to take a step back.

What you measure is not as important as how you use and present the measurements you have captured. It is the stories that go with the metrics that matter, not the numbers.


4 comments:

  1. Say you have a strong story that gives context and details.
    Could you skip the measurements?
    Only report important information, no misleading numbers?

  2. Rikard: if that works in the situation/environment you are in, then yes, of course you could skip the measurements.

    However, the article is my take on how I work, and people still like to see numbers and coloured boxes. The skill for a tester is to make the numbers less of a priority and the explanation (the story) the focus.

  3. People in my environment also want numbers.
    My kids want candy, but that doesn't mean they can have it.

  4. Your kids don't pay you to give them candy ;o)

    The point of this article is not to say what you should or should not do. It is about the fact that we get too hung up over what is the correct thing to measure. Instead I give the numbers etc. that are required, but back this up with a much larger emphasis on the stories. It is how you present the information that becomes important, rather than just giving a set of percentages.
