
Monday, 27 January 2014

Measuring Exploratory Testing

A quick post on a concept we are working on within our company.

One of the difficulties I have found with implementing exploratory testing is finding a way to measure, at a level that makes sense to stakeholders, how much testing has been done.  This article looks at that problem and tries to offer a solution.  It should be noted that there are already good ways of reporting automation (checking), so that is out of scope for this article.

The current way we manage exploratory testing is by using time-boxed sessions (session based test management), and for reporting at a project level we can (and do) use dashboards.  This leaves open the question of how much exploratory testing has been done against the total amount of testing time available.

After discussions with some work colleagues, we came up with the following concept (this was a great joint collaboration effort; I cannot claim the ideas as just mine).  The basic idea of session based test management is that you time box your exploration (charters) into sessions, where one session equates to one charter (if you have not come across the terminology of charters, refer to the session based test management link).  To simplify, we use an estimate that one session is half a day (sometimes you do more, sometimes less), which gives us a crude way to estimate the number of charters you could run in a given period of time.

For example, in a sprint/iteration of two weeks each person could run roughly 20 sessions, so with 5 testers you have a total of 5 * 20 = 100 possible sessions within your sprint.  No one on a project is utilised like this 100% of the time, so the concept we came up with is that for your project you set a target for how much of your team's time should be spent doing exploratory testing.  The suggestion is to start with a value such as 25%, with the aim of increasing it as your team moves more of the checking into automation and more of the testing into exploration, the goal being a 50/50 split between checking and testing.

Using the example above we can now define a rough metric to see whether we are meeting our target (limited by time).

If we have two weeks, 5 testers and a target of 25% exploratory, then by the end of the two weeks we would expect to have done 25 exploratory sessions if we are meeting our target.
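A minimal sketch of that arithmetic, assuming the half-day session estimate above (the variable names are my own illustration, not part of any SBTM tool):

  # Rough sketch of the session-capacity arithmetic from the example above.
  # Assumes one session is roughly half a working day, as suggested in the post.
  working_days = 10       # two-week sprint/iteration
  sessions_per_day = 2    # one session ~ half a day
  testers = 5
  target_percent = 25     # % of available time aimed at exploratory testing

  possible_sessions = working_days * sessions_per_day * testers    # 100
  target_sessions = possible_sessions * target_percent / 100       # 25.0

  print(f"Possible sessions: {possible_sessions}")
  print(f"Target sessions ({target_percent}%): {target_sessions:.0f}")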

We can use this to report at a high level whether we are meeting our exploratory testing target, within a dashboard as shown below.

Possible sessions            100
% Target Sessions            25%
Number of actual sessions    25
% Actual Target              25%
Following this format we can then use colours (red/green) to indicate whether we are above or below our target:

Possible sessions            100
% Target Sessions            25%
Number of actual sessions    15
% Actual Target              15%
We feel this would be a useful indication of the amount of time available and the amount of time actually spent doing exploratory testing rather than checking (manual or automated).
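Extending the sketch above, one possible way to derive the red/green indicator (the function and its output format are my own illustration, not a defined part of the approach):

  def dashboard_row(possible_sessions, target_percent, actual_sessions):
      """Build one dashboard row with a red/green status against the target."""
      actual_percent = actual_sessions / possible_sessions * 100
      status = "green" if actual_percent >= target_percent else "red"
      return {
          "Possible sessions": possible_sessions,
          "% Target Sessions": f"{target_percent:.0f}%",
          "Number of actual sessions": actual_sessions,
          "% Actual Target": f"{actual_percent:.0f}%",
          "Status": status,
      }

  # The two dashboards above: on target (green) and below target (red).
  print(dashboard_row(100, 25, 25))
  print(dashboard_row(100, 25, 15))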

There are some caveats that go with using this type of measurement.

Within session based test management the tester reports roughly the amount of time they spend:
  • Testing
  • Reporting
  • Environment set-up
  • Data set-up
This is reported as a percentage of the total time in a session, so more detailed reporting can be done within a session, but we feel this information would be of more use at a project level than at a stakeholder level.  If it turned out to be of use to stakeholders, this is something we could revisit.
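As an illustration only, a sketch of how those per-session percentages might be rolled up at project level (the numbers and the helper are hypothetical, not SBTM tooling):

  # Hypothetical per-session breakdowns, each reported as a percentage of the
  # session's total time (the categories follow the list above).
  sessions = [
      {"Testing": 60, "Reporting": 20, "Environment set-up": 10, "Data set-up": 10},
      {"Testing": 45, "Reporting": 25, "Environment set-up": 20, "Data set-up": 10},
  ]

  def average_breakdown(reported_sessions):
      """Average each category's percentage across all reported sessions."""
      categories = reported_sessions[0].keys()
      return {c: sum(s[c] for s in reported_sessions) / len(reported_sessions)
              for c in categories}

  print(average_breakdown(sessions))
  # {'Testing': 52.5, 'Reporting': 22.5, 'Environment set-up': 15.0, 'Data set-up': 10.0}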

Your thoughts on this concept would be most welcome; we see this as a starting point for a discussion that will hopefully lead to a useful way of reporting, at a high level, how much time we spend testing compared to checking.

We are not saying this will work for everyone, but for us it is an ideal way of saying to stakeholders: of all the possible time we could have spent testing (exploratory), this is the amount of time we did spend, and these are the associated risks.

Tuesday, 27 August 2013

The ‘Art’ of Software Testing

I was recently in Manchester, England, when I came across the Manchester Art Gallery, and since I had some spare time I decided to visit and have a look around.  I have an appreciation for certain styles of art, especially artists such as William Blake and Constable.  During my visit I had a moment of epiphany.  Looking around the different collections, they appeared to be set out in a structured style, with similar styles collated together, apart from an odd example of the famous "Flower Thrower" artwork by Banksy being placed in the seventeenth-century art collection area.  I wondered if this was a deliberate action to cause debate.

What struck me when looking around was that even though many similar painting techniques and methods had been applied, there was no standard size for any of the paintings on display.  I looked around and could not find two paintings that appeared to have the same dimensions.  I even started to wonder whether a common ratio was being used, such as the so-called golden ratio.  Looking around quickly I could see some aspects of ratios in use, but to my eyes it appeared that even though the artists used similar approaches and techniques, they were ‘free’ to use these methods as a guide to producing their masterpieces.

This made me think about the debates in the field of software testing and how we should be taking on board engineering processes and practices.  If that is the case and we try to rein in the imagination, how are we supposed to hit upon moments of serendipity and be creative?  I agree there needs to be structure and some discipline in the software testing world; session based testing takes account of this.

We have common methods and techniques that we can use in software testing, but how we apply them should surely be driven by the context of the project.  I believe we need to stop or resist processes and best practices that hinder or suppress innovation and prevent those moments of enlightenment in which deeply hidden ‘bugs’ are discovered.  Following pre-planned and pre-written test steps by rote can be useful, but if we all follow the same path, how many wonderful things are we going to miss?  I liken it to following a map or a GPS system and not looking around at the amazing landscape that you are not noticing or appreciating.  In our field of software testing we must allow testers to explore, discover and sometimes find by accident.  We need to stop forcing processes, best practices and standards upon something which is uniquely human: the skill of innovation and discovery.

The title of this article is an homage to one of the first books I read about software testing, by Glenford Myers – The Art of Software Testing.


Tuesday, 11 January 2011

The Feedback Loop

One of the critical elements of following the session based test management (http://www.satisfice.com/sbtm/) approach is the use of quick feedback.  To achieve this it is suggested that a debrief should be done at the end of each session/day.  Jon Bach (http://www.satisfice.com/articles/sbtm.pdf) suggests the use of PROOF:

Past. What happened during the session?
Results. What was achieved during the session?
Obstacles. What got in the way of good testing?
Outlook. What still needs to be done?
Feelings. How does the tester feel about all this?

This approach is excellent for communicating what has happened during the testing session(s); however, I keep hearing that people are not doing the debrief.  There are many reasons given for this: lack of time or resource, or seeing no benefit, are a few.  This blog post explains why it is important to carry out these debriefs and to ensure they are done sooner rather than later.

I am looking at this from a psychology viewpoint to highlight the way our minds work and to keep reminding readers that software testing is a human, sapient process and not an automated ticking of boxes.

There are various studies indicating that the longer you take to act upon information, the less you are able to recall that same information at a later date.  During Eurostar 2010 Graham Freebur stated that unless you act upon the information you digested at the conference, within 72 hours it would start to be lost and fade.  The crucial point is that as humans we are fallible, and lots of different psychological biases start to play with our minds, so the longer we wait to talk about and pass on the information we have, the more likely it is to become clouded.

It is important that we debrief to someone so that any error in our interpretation of the system under test can be corrected.  When we are testing a complex system we make assumptions as we test, and the system may appear to confirm those assumptions, fuelling what could be incorrect interpretations of the system.  A computer system will never be able to tell you whether your assumptions are right or wrong; at most it can indicate a bias one way or another.  The only way to repair errors in interpretation is to interact with a human being.  This is why the debrief is so important: it allows assumptions to be challenged and, if necessary, corrected.

As humans we are very good at adapting and changing our viewpoint and opinion when presented with new information, but to do this effectively it needs to happen in a conversational setting.  We are very bad at dealing with delayed feedback, and the longer it is left, the more likely we are to keep our initial biases and interpretations.

The point of this rather short blog post is to explain why the debrief after a testing session is important and why it needs to be done as soon as possible.  Delays and excuses only allow more assumptions and incorrect information to appear to be the correct answer.

Make the time to debrief, plan for it and use it; it is a crucial element of testing.