This article follows on from my previous article, Why we need to explore, and looks at how measuring test coverage becomes a difficult task when we simplify the constructs of software development down to expectations and deliverables. I should acknowledge that the model I use here is extremely simplified and is only used to aid clarification. There are many more factors involved, especially within the expectations section, as Michael Bolton quite rightly commented on the previous article.
If we go back to our original diagram (many thanks to James Lyndsay), it shows our expectations and our deliverable; where they overlap is where our expectations are met by the deliverable.
At a simple level we could then make the following reasonable deduction:
We can express all our known expectations as 100% and, for measurement purposes, say that x% of our expectations have been met by the deliverable and y% have not. This gives us a simple metric for how much of our expectations have been met. It seems very clear and, to some people, could be a compelling measurement to use within testing. The following diagram gives a visual reference to this.
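To make the arithmetic behind that percentage concrete, here is a minimal sketch. The expectation names and numbers are purely illustrative, not from any real project; the point is only that the metric is a simple ratio against the expectations we happen to know about.

```python
# Illustrative sketch of the "expectations met" metric described above:
# treat the set of known expectations as 100% and report what fraction
# the deliverable satisfies. All names here are hypothetical.

known_expectations = {"login works", "report exports", "search returns results"}
met_by_deliverable = {"login works", "report exports"}

met_pct = 100 * len(met_by_deliverable & known_expectations) / len(known_expectations)
unmet_pct = 100 - met_pct

print(f"Expectations met: {met_pct:.0f}%")       # 67%
print(f"Expectations not met: {unmet_pct:.0f}%")  # 33%
```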
This is only half the story, since on the other side is the part where we need to do some exploring and experimentation. This is the stuff in the deliverable that we do not know or expect, and it is the bread and butter of our testing effort. The problem is that we do not know what is in this area, or how big or small it is (I will return to that point later). We are now in a measurement discomfort zone: how do we measure what we do not know? The following diagram shows a visual representation of this.
This measurement problem is also compounded by the fact that as you explore and discover more about the deliverable, your tacit knowledge can become more explicit and your expectations start to grow. So you end up in the following situation:
Now your expectation baseline is beyond 100% and, as you explore, it keeps on growing. So your percentage of met and unmet expectations becomes a misleading and somewhat pointless metric.
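A rough sketch of why this happens is shown below. The numbers are invented for illustration: the same amount of testing reports a very different "percentage met" once exploration has surfaced expectations that were not in the original baseline.

```python
# Illustrative sketch of why the percentage becomes misleading: exploring
# the deliverable surfaces new expectations, so the baseline that was
# treated as "100%" keeps growing underneath the metric.
# All figures are hypothetical.

met = 80                           # expectations confirmed as met
known_at_start = 100               # the original "100%" baseline
discovered_while_exploring = 40    # new expectations surfaced by exploration

print(f"Reported at the start: {100 * met / known_at_start:.0f}% met")  # 80% met
new_baseline = known_at_start + discovered_while_exploring
print(f"Same testing, grown baseline: {100 * met / new_baseline:.0f}% met")  # 57% met
```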
I was asked if there is anything that could be done to increase the area where the expectations are met by the deliverable, and this led to me adding another diagram, as shown below.
**Still not to scale
Since testing in theory could be an infinite activity, how much testing we do before we stop is determined by many factors; Michael Bolton has a superb list in an article here.
In summary, the amount we know and expect of a piece of software is extremely small in comparison to what we do not know about the software (the deliverable), hence my first post in this article series on the need to explore the system to find useful information. We need to be careful when using metrics to measure the progress of testing, especially when that measurement appears easy to gather.
Further Reading on Metrics and Testing
- But how many test cases?
- Test Cases Counting reflections
- Response on how many test cases
- Software Engineering Metrics: What Do They Measure and How Do We Know?
- The Darker Side of Metrics
- What counting measurements in testing
- Measurement Issues and Software Testing
- The Impossibility of Complete Testing
- Why 100% test pass rates are bad
- Test Estimation and the Art of Negotiation
- End to End Software Development Traceability with Mind Maps
- Reporting dashboard example
- Meaningful Measures
- Meaningful Metrics - Michael Bolton
- Three Kinds of Measurement (And Two Ways to Use Them)
- Issues About Metrics About Bugs
- Got you Covered.
- Cover or Discover
- A Map by any other name