On far too many occasions while working in testing, I have seen people spend months gathering information and creating test plans that try to cover every single requirement, edge case, and corner case. Some people see this as productive and important work; in the dim and distant past I did too. I have learnt a lot since then, and I no longer think it is as important as people make it out to be. I see it as a waste of time and effort, time that would be better spent actually testing the product and finding out what it is doing.
This is not to say that no planning is the right approach; rather, the test planning phase may be better spent defining what you need in order to start testing. It is more important to discover the things that could block you or, even worse, prevent you from testing at all. This article looks at the planning phase of testing from my own experience and viewpoint.
The starting point for this article was re-reading an article that Michael Bolton wrote for a previous edition of Sticky Minds called 'Testing Without a Map'. In it, Michael talked about using heuristics to help guide your testing effort; at the time he suggested using HICCUPPS as a guide and focusing on inconsistencies. That article was about useful approaches while actually testing the product rather than about the planning phase. This article focuses on the phase before you actually test.
The only way to know something is to experience it, and by experiencing what the software is doing you are testing it. In my experience there is normally a delay between something being developed and there being something testers can test (yes, even in the world of Agile). This is the ideal time in which we can, and should, do some test planning. But what do we include in our plan? If we follow the standard IEEE approach to test planning, the following areas are recommended for inclusion in the test plan.
1. Test plan identifier – unique number, version, identification of when an update is needed (for example at x% requirements slip), change history
2. Introduction (= management summary) – in short, what will and will not be tested; references
3. Test items (derived from risk analysis and test strategy), including:
a. Version
b. Risk level
c. References to the documentation
d. References to incident reports
e. Items excluded from testing
4. Features to be tested (derived from risk analysis and test strategy), including:
a. Details of the test items
b. All (combinations of) software features to be tested or not (with reasons)
c. References to design documentation
d. Non-functional attributes to be tested
5. Approach (derived from risk analysis and test strategy), including:
a. Major activities, techniques and tools used (with a number of paragraphs for items at each risk level)
b. Level of independence
c. Metrics to evaluate coverage and progression
d. A different approach for each risk level
e. Significant constraints regarding the approach
6. Item pass/fail criteria (or: completion criteria), including:
a. The criteria to be used (for example, outstanding defects per priority)
b. Based on standards (ISO 9126 parts 2 & 3)
c. An unambiguous definition of the expectations
d. Do not count failures only; keep the relation to the risks
7. Suspension and resumption criteria (to avoid wasted effort), including:
a. Criteria to suspend all or a portion of the tests (at intake and during testing)
b. Tasks to be repeated when resuming testing
8. Deliverables (a detailed list), including:
a. All documents, to be used for the schedule
b. Milestones and standards to be used
9. Testing tasks, for preparing the resource requirements and verifying that all deliverables can be produced.
10. The list of tasks, derived from the approach and deliverables, including:
a. Tasks to prepare and perform tests
b. Task dependencies
c. Special skills required
d. Tasks grouped by test roles and functions
e. Test management
f. Reviews
g. Test environment control
11. Environmental needs (derived from points 9 and 10), specifying the necessary and desired properties of the test environments, including:
a. Hardware, communication, system software
b. Level of security for the test facilities
c. Tools
d. Any other requirements, such as office space
WOW – if we did all of this, when would we ever get time to test? The problem is that in the past I have been guilty of blindly following this, using test plan templates and lots of cutting and pasting from other test plans.
Why?
This was how we had always done planning, and I did not question whether it was right or wrong or even useful. Mind you, in the back of my mind I would wonder why we were doing it, since nobody ever read it or updated it as things changed. Hindsight is a wonderful thing!
My thinking about what we really need to do when planning has changed drastically, and now I like to do just enough planning to enable a 'thinking' tester to do some testing of the product. The problem we face in our craft is that we make excuses not to do what we should be doing, which, by the way, is actual testing. We try to plan in far too much detail and map out every possible scenario and use case rather than focusing on what the software is doing. Continuing the theme of 'the map' from Michael Bolton's article, Alfred Korzybski once stated that
“The map is not the territory”
As a reader of this article, what does that mean to you?
To me it was an epiphany: the moment I realised that we cannot, and should not, plan what we intend to test in too much detail. What Korzybski was saying is that no matter how much you plan and how detailed your plan is, it will never match reality. In some ways it is like drawing a map at a 1:1 scale. How useful would such a map be for getting around? Would it be of any use at all? Would it capture the reality of the world you can see and observe? It would not be dynamic, so anything that had changed or moved would not be shown. What about interactive objects within the map? They are constantly changing and moving, and so by the time you get hold of the map it is normally out of date. Can you see how that relates to test plans?
What this means for software testing is that we can plan and plan and plan, but that gives no indication of the testing we will actually do. After a discussion on Skype, Michael Bolton came up with a great concept: we need to split planning time into test preparation and actual planning.
You need to spend some time getting ready to test: getting your environments, equipment, and automation in place. Without these you could be blocked from starting to do any testing at all. This is vital work, and far more important than writing down step-by-step test scripts.
The purpose of testing is to find out information, and the only way to do this is to interact with the application. More is discovered by exploration and accident than by planning; when something you planned for does happen, it is more than likely a coincidence. The problem with doing too much planning is that the plan is out of date by the time you reach the end of your testing. It is much better to have a dynamic, adaptive test plan that changes as you uncover and find more to test. One of the ways I have adopted this is through the use of mind maps. There have been many articles in the testing community on this subject; if you want to know more, I would suggest you go and Google 'mind maps and software testing'.
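As an illustration only (this is my sketch, not the author's actual tooling), an adaptive plan can be modelled as a nested structure that grows as testing uncovers new areas, a text-based stand-in for a mind map. All of the node names below are hypothetical:

```python
# A minimal sketch of an adaptive test plan as a nested dict,
# standing in for a mind map. Every node name here is made up.
plan = {
    "Login": {
        "happy path": {},
    },
    "Search": {},
}

def add_branch(plan, path):
    """Grow the plan as a test session uncovers new areas to explore."""
    node = plan
    for name in path:
        node = node.setdefault(name, {})

def show(plan, depth=0):
    """Print the current plan as an indented outline."""
    for name, children in plan.items():
        print("  " * depth + "- " + name)
        show(children, depth + 1)

# A session reveals that search filtering needs attention,
# so the plan grows instead of being fixed up front:
add_branch(plan, ["Search", "filters", "date range"])
show(plan)
```

The point of the sketch is that the plan is cheap to extend mid-testing; nothing is mapped out until testing shows it is worth mapping.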
The problem we have is that people are stuck in the mentality that test cases are the most important output of test planning. We need to move away from test cases and towards missions (goals): something you can do and achieve in a period of time and, more importantly, something reusable, where how it is used depends on the context and the person carrying out the mission. When planning, you only need to plan enough to start testing (as long as your test preparation has been done). Then, as you test, you will uncover interesting information and start to map out what you actually see rather than what you thought you might see. Your test plan will grow and expand as you become rich in the information and knowledge you find and uncover.
Accurate forecasts aren't possible because the world is not
predictable
So it is wise not to plan too far ahead: plan only enough to do some testing, find out what the system is doing, and adjust your planning based upon the hard information you uncover. Report this to those who matter; the information you find could be what is valuable to the business. Then look for more to test. You should always have a backlog, and the backlog should never be empty. The way I do this is to report regularly what we have found and what new missions we have generated based upon the interesting things we came across. I then re-factor my missions based upon:
- The customer priority – how important is it to the customer that we do this mission?
AND
- The risk to the project – if we did this mission instead of the one we had planned to do next from the backlog, what risk would that pose to the project?
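This re-prioritisation can be sketched in a few lines, assuming each mission carries simple numeric scores. The field names and the 1–5 scales are my own illustration, not something from the article:

```python
# Hypothetical mission backlog; the field names and 1-5 scales
# are illustrative assumptions, not taken from the article.
backlog = [
    {"mission": "Survey admin reports layout",     "customer_priority": 2, "project_risk": 1},
    {"mission": "Explore checkout error handling", "customer_priority": 5, "project_risk": 5},
    {"mission": "Probe login rate limiting",       "customer_priority": 4, "project_risk": 5},
]

def refactor_backlog(missions):
    """Reorder missions so those with the highest combined customer
    priority and project risk sit at the top of the backlog."""
    return sorted(
        missions,
        key=lambda m: m["customer_priority"] + m["project_risk"],
        reverse=True,
    )

for m in refactor_backlog(backlog):
    print(m["mission"])
```

In practice the weighting between customer priority and project risk is a judgment call made with the people who matter; a simple sum like this is just one way to make the trade-off visible.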
To summarise, we need to think more about how much planning we do, and think critically about whether producing endless pages of test cases during test planning is the best use of our resources. We need to plan enough to do some testing and adapt our test plan based upon the information we uncover. We need to re-evaluate what we intend to do often, and adapt the plan as our knowledge of the system increases. It comes down to this: stop trying to map everything, and map just enough to give you a starting point for discovery and exploration.
*Many thanks to Michael Bolton for being a sounding board and providing some useful insights for this article.