I saw a couple of tweets by @Lynn_Mckee recently on the metrics that are used in testing.
There are many great papers on #metrics. Doug Hoffman's "Darker Side of Metrics" provides insight on behavior. http://bit.ly/gKPHcj #testing
Ack! So many more that are painful... Scary to read recent papers citing same bad premises as papers from 10 - 15 yrs ago. #testing #metrics
And it made me think about how we measure testing.
This article is not going to be a 'this is how you should measure testing' piece, nor will it offer any 'best practice' ways of measuring.
My concern with any of the ways in which we measure is that it is often done without context or any connection to the question you wish to have answered with the numbers. It becomes a set of numbers devoid of any information about their 'real' meaning. There are many and varied debates within the software testing field about what should and should not be measured. My take on all of this is:
Can I provide useful and meaningful information with the metrics I track?
I still measure the number of test cases that pass and fail and the number of defects raised and fixed.
Is this so wrong?
If I presented these numbers on their own, without any supporting evidence and a story about the state of testing, then yes, it is wrong, and it can be very dangerous.
I view the metrics gathered during testing as an indication that something might be correct or wrong, working or not working. I do not know this just from the metrics; I know it from talking to the team, debriefing, and discussing issues.
I capture metrics on requirement coverage, focus area coverage, percentage of time spent testing, defect reporting, and system setup. So I have a lot of numbers to work with, which on their own can be misleading, confusing, and open to misinterpretation. But when I investigate the figures in detail and look for patterns, I notice missing requirements, conflicting requirements, and what is stopping me from executing testing.
So what is this brief article saying?
Within the software testing community, I see that we get hung up on metrics and how we measure testing, and I feel we need to take a step back.
What you measure is not as important as how you use and present the measurements you have captured. It is the stories that go with the metrics that matter, not the numbers.