Thursday, 31 March 2011

An update or two.

I noticed that I have not written a blog article in a while, so I thought I would put together a short article on what I have been up to, so that regular readers can be sure that I am still alive and well.

On the personal front, we have had a few health scares over the past month, hence my lack of tweeting or blogging.

On the work front, I have been very busy and involved in a few different and exciting projects, while continuing to look at different ways in which we can improve.

During this period I have been looking more and more into ethnographic research and its connection to testing. I find this area of social science fascinating, and I am struck by how closely it appears to relate to testing. Since there does appear to be a connection, I am currently running a couple of internal case studies based upon methods from ethnographic research, as described by Richardson in the article for Qualitative Inquiry, "Evaluating Ethnography".

The findings from these case studies will be presented at the UNICOM Next Generation Testing conference on 18/19 May 2011.

If you cannot make that event, I do intend to give a very basic, quick introduction to this approach at the Software Testing Club meet-up in Oxford on 14 April 2011. The meet-up will be the world premiere for the approach I have been working on, which definitely makes it worth attending. Or, if that is not enough, the fact that Lisa Crispin and Rob Lambert will be there should tick everyone's box.

Without giving away too much detail before the meet-up, here is a brief summary of the approach I have been investigating:

  • The concept is based upon questioning the tester as much as you question the product being tested.
  • It is a checklist that can be used on an individual basis and should take between 5 and 10 minutes. The idea is to look at what you are doing, check that it is the right thing, and see if you are missing anything.
  • I will be giving away the checklist on the evening of the meet-up. (Wow, a freebie!)

Have I given away too much information, not enough, or left you wanting more?

If you want to know more, then I suggest you sign up to attend the meet-up or the UNICOM conference.

Thursday, 10 March 2011

Is Context Driven Testing a gimmick?

My inspiration for this article was a comment I received while trying to organise an internal Rapid Software Testing course.

Someone commented that they felt this 'smacks of a gimmick', but that they would be interested in finding out what people discover or get out of it in relation to the way we work.

The views expressed in this article are solely my own and are based upon my own experiences and knowledge of the testing profession.

Currently within the testing world there appear to be two schools of thought:

The traditional (standards) approach – driven by the ISTQB examination board (formerly the ISEB)

And there is the

Context Driven Testing concept – driven by people such as Cem Kaner, James Bach and Michael Bolton.

This article is not about entering a debate over which approach is better; IMO a good balance is a mixture of the various approaches. There are many more approaches and concepts in testing than the two mentioned above, such as the Agile and analytical approaches, but the main discussions within the testing community appear to refer mainly to the two 'schools' listed above.

The principle of context-driven testing is that the emphasis is on thinking, experiencing and doing, rather than assuming and interpreting what people believe the system should do. It is more 'hands-on': you learn about the system as you test it.

A common misconception is that there is no planning involved in Rapid Software Testing, and that it is based upon 'free' testing without structure or discipline. In my experience, and from using the material from the Rapid Software Testing course, there is more planning, structure and focus than with any other approach. Session-based testing, in which testers have a mission and a goal to aim for during each testing session, ensures that testing remains focused and on track.

The difference between the two approaches is that the standards approach is mainly used to define testing (test cases) before you actually have access to the system under test. There is an inherent weakness in this approach: it assumes that the requirements, design and statements of user needs are correct, accurate and complete.

Once actual testing has started, the majority of testers revert to working in a context-driven way: they adapt the scripts they have written, think of new ones, and decide which not to run. The context-driven approach is to do some lightweight upfront planning that deliberately leaves room for ambiguity, giving the tester the freedom to apply their own judgement and to adapt as they learn more about the system. The tester builds up knowledge of the system while executing tests, creating new ones and recording what they are testing. This is the basic definition of exploratory testing. Time is not wasted creating tests that will never be run, or maintaining a list of test scripts with incorrect steps. It is about recording what is happening at the time testing happens and storing the results of that testing session. Where possible, a session's tests can then be automated so that they never have to be run manually again.
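To make the session idea above concrete, here is a minimal sketch of what a recorded testing session might look like as a data structure. This is purely my own illustration, not part of the Rapid Software Testing material; all the field names and the 90-minute time-box are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """One time-boxed exploratory testing session with a mission (charter)."""
    charter: str                    # the mission: what to explore and why
    tester: str
    duration_minutes: int = 90      # an assumed typical time-box; teams vary
    notes: list = field(default_factory=list)   # observations recorded as you test
    bugs: list = field(default_factory=list)    # problems found during the session

    def record(self, note: str) -> None:
        """Capture an observation at the moment it happens."""
        self.notes.append(note)

    def log_bug(self, summary: str) -> None:
        """Store a problem found, so the session output is preserved."""
        self.bugs.append(summary)

# Example: a session stays focused on its charter while results are recorded.
session = TestSession(
    charter="Explore the login flow for error handling with invalid credentials",
    tester="J. Tester",
)
session.record("Lockout message appears after three failed attempts")
session.log_bug("Error text truncated at narrow viewport widths")
```

The point of the sketch is simply that the session, not the pre-written script, is the unit of record: the charter keeps the tester on track, and the notes and bugs are what get stored (and potentially automated) afterwards.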

The difference between the approaches is that Rapid Software Testing requires testers to think as they test, not just tick boxes. It forces the tester to question what they see and allows them to freely explore and discover new things about the system. It uses triggers (heuristics) to keep asking the question, "Is there a problem here?"

It does this by comparing the product with similar products, looking at the history of the product, or examining the claims made about the product. These are tests for which there is no yes-or-no answer; it depends on the context and the thinking of the tester.
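The three heuristics just mentioned can be thought of as a reusable list of questions that a tester applies to whatever feature is in front of them. As a rough illustration only, here is how such a list might be sketched in Python; the names and wording are my own paraphrase, not the official Rapid Software Testing material:

```python
# A sketch of consistency heuristics as a reusable checklist.
# The names and wording here are a paraphrase for illustration only.
CONSISTENCY_HEURISTICS = [
    ("Comparable products", "Is the behaviour consistent with similar products?"),
    ("History", "Is the behaviour consistent with past versions of this product?"),
    ("Claims", "Is the behaviour consistent with what the product claims to do?"),
]

def session_prompts(feature: str) -> list:
    """Turn each heuristic into a concrete question for the feature under test."""
    return [f"{name}: {question} (feature under test: {feature})"
            for name, question in CONSISTENCY_HEURISTICS]

for prompt in session_prompts("CSV export"):
    print(prompt)
```

Note that none of these questions has a yes-or-no answer the checklist can decide for you; the heuristics only prompt the tester's thinking, which is exactly the point made above.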

Is context driven testing a gimmick?

IMO, it is not.

It is a natural way to test products, and it is the way testing has been done since it became a career choice. However, people have not admitted (or will not admit) to following this approach, or may not be aware that they are doing so.

The whole point of Rapid Software Testing is to give this way of working a name, and to provide useful skills and tools to improve it, following the thinking of context-driven testing.



After a lively discussion on Twitter with James Bach, I feel I need to clarify some misuse of definitions.

James gave a great description of the difference between Rapid Software Testing and context-driven testing:

Rapid Testing is a testing methodology that is context-driven.
But context-driven testing is not Rapid Testing.

After this 'revelation' I have made some minor changes to the original post.