http://clarotesting.wordpress.com/2010/07/21/but-how-many-test-cases/
and several people blogged a response
Simon Morley added his view here: http://testers-headache.blogspot.com/2010/07/test-case-counting-reflections.html
Jeroen Rosink added his here: http://testconsultant.blogspot.com/2010/07/repsonse-on-how-many-test-cases-by.html
and Abe Heward directed people to a similar post he had written earlier: http://www.abeheward.com/?p=1
Each of these posts makes very valid points about how little use counting test cases is for indicating the progress or coverage of testing effort.
The aim of this blog is to try and expand upon these posts and see if there are ways in which we could measure testing effort and progress without resorting to using numbers.
To start with we shall take a look at a made-up situation.
You are testing on a project which has a two-week testing cycle, and your manager has requested that you report the following each day:
- How many test cases you have
- How many have been run
- How many have passed
- How many have failed.
(Does this seem familiar to anyone?)
So before you start testing you report to your manager that you have 100 test cases to run over the two-week cycle.
At the end of day one you report the following:
- Test cases run: 60
- Test cases passed: 59
- Test cases failed: 1
- Defects raised: 1
- Test cases still to run: 40
So management thinks: cool, we are ahead with the testing – 60% done in one day.
At the end of day 2 you report:
- Test cases run: 62
- Test cases passed: 61
- Test cases failed: 1
- Defects raised: 1
- Test cases still to run: 138
Management now thinks: how come you only ran two test cases today, why are you going so slowly? WHAT!!!! Where did those other 100 test cases come from? Did you not do your job correctly to begin with?
However, the two you ran today had lots of dependencies and very complex scripts.
Plus, your testers noticed that there appeared to be new features that had not been documented or reported, so you have now had to add another 100 test cases. Also, your testers actually think while they are testing, and they came up with new edge cases and new ways to test the product as they went.
Management starts to panic – you reported on day one that 60% of the testing had been completed, and now you are saying only about 30% has been completed (62 of the now 200 test cases). The stakeholders are not going to be happy being told that only 30% has been covered when, the day before, they were told that 60% was complete.
This continues; your testing team are really good testers and find more and more test ideas, which are turned into test cases. So at the end of day seven you report the following:
- Test cases run: 1200
- Test cases passed: 1109
- Test cases failed: 91
- Defects raised: 99
- Test cases still to run: 10000
So at the end of the first week you have only completed around 11% of all the known test cases. You get fired for incompetence and the project never gets released.
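If it is not obvious where those sliding percentages come from, here is a small illustrative sketch of the arithmetic (the figures are just the made-up ones from the scenario above, nothing more):

```python
# Illustrative only: "percent complete" by test-case count, using the
# made-up numbers from the scenario above. The denominator keeps growing
# as new test cases are discovered, so the percentage goes backwards
# even though more testing gets done every day.

daily_reports = [
    # (day, test cases run so far, test cases still to run)
    (1, 60, 40),
    (2, 62, 138),
    (7, 1200, 10000),
]

for day, run, still_to_run in daily_reports:
    total = run + still_to_run  # total known test cases on that day
    percent_complete = 100.0 * run / total
    print(f"Day {day}: {run}/{total} test cases run -> {percent_complete:.0f}% 'complete'")

# Day 1: 60/100 test cases run -> 60% 'complete'
# Day 2: 62/200 test cases run -> 31% 'complete'
# Day 7: 1200/11200 test cases run -> 11% 'complete'
```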
Many people reading this may have experienced something similar to the above scenario; what worries me is that there are still people stating that the best way to measure testing is by counting test cases!
The question now is: if counting test cases is not a good way to measure, then what can we do?
The following suggestions are my own and are what I apply within my test approach; that does not mean they will work for everyone, nor am I saying this is the best approach to take. However, the purpose of my blog is to offer suggestions about testing that could be useful to some people.
I work in the following testing environment:
- Agile-based – two-week iterations
- Customer requirements changing frequently
- Code delivered daily
- Functions and features added without supporting documentation
- Use a mixture of scripted and exploratory testing
If I tried to report the testing effort using the traditional test case counts described above, it would be of little (or zero) value, since the number of test cases would be constantly changing.
What we do is split functions, features, etc. into test charters, as per the exploratory testing approach; these 'test charters' are known as the test focus areas of the software. If a new function or feature is discovered, a new charter is created.
We then use the Session Based Test Management approach (James and Jon Bach - http://www.satisfice.com/sbtm/) and implement sessions based upon mission statements and test ideas. During the testing session the testers are encouraged to come up with new test ideas or new areas to test; these are captured either during the session or during the debrief.
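To give an idea of how little structure this actually needs, here is a rough sketch of how charters and sessions could be captured. The field names and example charters are my own invention for illustration – SBTM prescribes the ideas (charters, missions, session notes, debriefs), not this particular structure:

```python
# A minimal, illustrative sketch of capturing test charters and sessions.
# The field names and example data are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Session:
    mission: str                                              # the mission statement for this session
    notes: List[str] = field(default_factory=list)            # observations and issues
    new_test_ideas: List[str] = field(default_factory=list)   # captured during the session or debrief

@dataclass
class Charter:
    name: str                                                 # the test focus area
    sessions: List[Session] = field(default_factory=list)

charters = [Charter("Login and authentication"), Charter("Report generation")]

# A tester runs a session against a charter and captures what they found.
charters[0].sessions.append(Session(
    mission="Explore login with expired and locked accounts",
    notes=["Lockout message reveals whether the account exists"],
    new_test_ideas=["Try concurrent logins from two browsers"],
))
```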
The reporting of progress is done at the test charter (test focus area) level. The test manager reports in the following way:
- Test focus area 1 – testing has started and there are a few issues in this area: issue x, issue y, issue z, which need to be resolved before there is confidence that this area is fit for its purpose.
- Test focus area 2 – has been tested and is fit for its purpose.
- Test focus area 3 – testing has started and some serious failures have been found: defect 1, defect 2, defect 3.
And so on.
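Purely as an illustration of how lightweight this reporting can be (the statuses and wording below are my own example, not a prescribed format), each focus area only needs a status and its open issues:

```python
# Illustrative sketch of charter-level (test focus area) progress reporting.
# The statuses, areas and issue names are example data only.
focus_areas = {
    "Test focus area 1": {"status": "testing started",
                          "issues": ["issue x", "issue y", "issue z"]},
    "Test focus area 2": {"status": "tested - fit for purpose",
                          "issues": []},
    "Test focus area 3": {"status": "testing started - serious failures found",
                          "issues": ["defect 1", "defect 2", "defect 3"]},
}

for area, report in focus_areas.items():
    line = f"{area}: {report['status']}"
    if report["issues"]:
        line += " - open issues: " + ", ".join(report["issues"])
    print(line)
```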
Some people may ask: but how will this tell us if we will meet the deadline for testing? I am sure it will NOT tell you whether you will finish ALL of your testing before the deadline, since testing is an infinite task; we as testers will carry on testing until we meet a stopping heuristic (see Michael Bolton's article on stopping heuristics: http://www.developsense.com/blog/2009/09/when-do-we-stop-test/).
The problem with testing is that there is no yes-or-no answer to the question "have you completed your testing yet?". Every time a skilled tester looks at the software they can come up with more and more test areas and test ideas that they could carry out. These may or may not add to the confidence that the software is suitable and fit for its purpose. What is required is a test manager who talks to and listens to their test team, sees which test areas are the most important, and MANAGES test sessions based upon what is critical – basically some good old prioritizing. The test manager needs to ask the difficult questions of the stakeholders and project managers:
- What features can you do without?
- What are the critical areas that are required?
- Function abc has many serious problems – it can cause problems x, y, z for your users. Do you need function abc?
- We have tested all the key functions and found the following problems: x, y, z. You want to release tomorrow; are you OK with these known issues?
In return, the stakeholders and project managers must trust the test team and accept that when the testers report that an area has been 'sufficiently' tested, they can believe them.
To summarize – instead of reporting on a small unit of testing such as test cases, move a couple of levels up and report on the progress of test areas/functions/features based upon the importance of each feature. This may not tell you whether you will complete the testing before the deadline, but it will show you how well the testing is progressing in each functional area, at a level that stakeholders can relate to and understand. The trust your stakeholders have in you should improve, since you are giving them a story about the progress of the testing effort without trying to hide things behind numbers.