Wednesday, 29 January 2014

Using games to aid tester creativity

Recently Claire Moss blogged about potty training and how this came about from a card game called Disruptus, which I introduced to the Atlanta Testing meet up while I was in the USA. This reminded me that I was going to blog about how I use this tool in a workshop and in my day-to-day testing to improve my own and my team’s testing ideas. The workshop is a creative and critical thinking and testing workshop which I intend to deliver at the London Tester Gathering in Oct 2014 – early bird tickets available.

The workshop is based upon a series of articles that I have written on creative and critical thinking (part 1 here). As part of the workshop I talk about using tactile tools to aid your creative thoughts; having objects you can hold and manipulate has been shown to improve creativity (kinesthetic learning). One part of the workshop introduces the game of Disruptus, which has very simple rules. You have about 100 flash cards with drawings or photographs on them, and you choose a card at random. The game even includes some spare blank cards for you to create your own flash cards. An example of some of the cards can be seen below:

You then have a selection of action cards which have the following on them:
    • IMPROVE - Make it better: Add or change 1 or more elements depicted on the card to improve the object or idea.
      • EXAMPLE From 1 card depicting a paperclip: Make it out of a material that has memory so the paperclip doesn’t distort from use.
    • TRANSFORM - Use the object or idea on the card for a different purpose.
      • EXAMPLE From 1 card depicting a high heel shoe: Hammer the toe of the shoe to a door at eye level and use the heel as the knocker.
    • DISRUPT - Look at the picture, grasp what the purpose is, and come up with a completely different way to achieve the same purpose.
      • EXAMPLE From 1 card depicting a camera: Wear special contact lenses that photograph images with a wink of the eye.
    • CREATE 2 - Using 2 cards, take any number of elements from each card and use these to create a new object or idea.
For the purpose of this article I will only be looking at the first three. You can either choose which action card you wish to use or use the die that is provided with the game. The rules are simple: you talk about how you have changed the original image(s) in accordance with the action card, and a judge decides which idea is best to determine the winner. When I do this we do not have winners; we just discuss the great ideas that people come up with – to encourage creativity there are no bad ideas.

The next step in the workshop is applying this to testing. Within testing there are still a great many people producing and writing test cases which are essentially checks. I am not going to enter into the checking vs testing debate here, however this game can be used if you are struggling to move beyond your ‘checks’ and repeating the same thing each time you run your regression suite. It can be used to provide ideas to extend your ‘checks’ into exploratory tests.  

Let us take a standard test case:
Test Case: Log in to the application using a valid username/password.
Expected result: Login successful, application screen is shown.
Now let us go through each of the action cards and see what ideas we can come up with to extend this into an exploratory testing session.

  •  IMPROVE - Make it better: (Add or change 1 or more elements depicted on the card to improve the object or idea.)

Using the action described above can you think of new ways to test by taking one element from the test case?

Thinking quickly for 1 minute I came up with the following:
    • How do we start the application? Are there many ways? URL? Different browsers? Different OS?
    • Is the login screen good enough or can it be improved (disability issues/accessibility)?
    • What are valid username characters?
    • What are valid password characters?
    • Is there a help option to tell you what valid usernames/passwords are?
    • Are there security issues when entering the username/password?
Can you think of more? This is just from stepping back for a minute and allowing creative thoughts to appear. (Remember, there are no bad ideas.)

Let us now look at another of the action cards.
  • TRANSFORM - Use the object or idea on the card for a different purpose.
What ways can you think of from the example test case above to transform the test case into an exploratory testing session?

Again we could look at investigating:
    • What alternatives are there to logging in to the application? Fingerprint, secure token, encrypted key?
    • Can we improve the security of the login code?
    • What security issues can you see with the login, and how can you offer improvements to prevent these issues?
It takes very little time to come up with many more ways in which you can transform the test case into something more than a ‘check’.

Now for the next (and final, for the purpose of this article) action card:
  • DISRUPT - Look at the picture, grasp what the purpose is, and come up with a completely different way to achieve the same purpose.
I may have already touched upon some of the ideas on how to disrupt in the previous two examples. That is not a bad thing: if an idea appears in more than one area, it could be an indication of an idea that is well worth pursuing.

Some ideas on disrupting could be:
    • Do we need a login for this? 
    • Is it being audited?
    • Is it an internal application with no access to the public?
I hope from this article you can see how such a simple game can help to improve your mental ability and testing skills, as Claire mentioned in her article.
Since software testing is a complex mental activity, exercising our minds is an important part of improving our work.
This is just a small part of the workshop. I hope you have enjoyed the article; if so, I hope to see some of you soon when I run the full workshop.

PS – I intend to run a cut down version of the workshop for the next Atlanta Testing Meet Up whilst I am here in the USA.  Keep a watch here for announcements in the near future.

Monday, 27 January 2014

Measuring Exploratory Testing

A quick post on a concept we are working on within our company.

One of the difficulties I have found with implementing exploratory testing is finding a way to measure, at a high level (for stakeholders), how much testing you have done. This article looks at this problem and tries to provide a solution. It should be noted that there are currently good ways of reporting automation (checking), and for this article that will be out of scope.

The way we currently manage exploratory testing is by using time-boxed sessions (session based test management), and for reporting at a project level we can (and do) use dashboards. This leaves open the question of how much (exploratory) testing has been done against the possible amount of testing time available.

After having discussions with some work colleagues, we came up with the following concept (this was a great joint collaboration effort; I cannot claim the ideas as just mine). The basic concept of session based test management is that you time-box your exploration (charters) into sessions, where one session equates to one charter (if you have not come across the terminology of charters, refer to the session based test management link). To simplify, we use an estimate that one session is half a day (sometimes you do more, sometimes less); we therefore have a crude way to estimate the possible number of charters you could run in a period of time.

For example, if you have a sprint/iteration of two weeks, each person could run up to 20 sessions; if you have 5 testers then the total possible number of sessions is 5 * 20 = 100 sessions within your sprint. No one on a project would be utilized like this 100% of the time, so the concept we came up with is that for your project you set a target for how much of your team’s time you want spent doing exploratory testing. The suggestion is to begin by setting this to a value such as 25%, with the aim of increasing it as your team moves more and more into automation for the checking and exploration for the testing, the goal being a 50/50 split between checking and testing.

Using the example above we can now define a rough metric to see if we are meeting our target (limited by time).

If we have 2 weeks, 5 testers, and a target of 25% exploratory, then by the end of the two weeks, if we are meeting our target, we would expect to have done 25 exploratory sessions.
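The arithmetic above can be sketched in a few lines of code. This is only an illustration of the worked example in the text (the function names and the assumption of a 5-day working week with half-day sessions are mine, not from any tool):

```python
# Sketch of the session-budget arithmetic: one session = half a day,
# so 2 sessions per person per working day, 5 working days per week.

def possible_sessions(weeks, testers, sessions_per_day=2, days_per_week=5):
    """Maximum number of exploratory sessions the team could run."""
    return weeks * days_per_week * sessions_per_day * testers

def target_sessions(weeks, testers, target_pct):
    """Sessions needed to hit the exploratory-testing target percentage."""
    return possible_sessions(weeks, testers) * target_pct // 100

# Worked example from the text: 2 weeks, 5 testers, 25% target.
print(possible_sessions(2, 5))    # 100 possible sessions
print(target_sessions(2, 5, 25))  # 25 sessions to meet the target
```

The same two numbers drive the dashboard that follows.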

We can use this to report at a high level whether we are meeting our exploratory targets, within a dashboard as shown below:

Possible sessions: 100
% Target Sessions: 25%
Number of actual sessions: 25
% Actual Target: 25%
Following this format we can then use colours to indicate whether we are above or below our target (red/green):

Possible sessions: 100
% Target Sessions: 25%
Number of actual sessions: 15
% Actual Target: 15%
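The red/green indicator is simple enough to express as a one-line rule. A minimal sketch (the function name is illustrative, not from any real dashboard tool):

```python
def session_status(actual_sessions, possible_sessions, target_pct):
    """Return 'green' if the actual percentage of sessions run meets
    or exceeds the target percentage, otherwise 'red'."""
    actual_pct = 100 * actual_sessions / possible_sessions
    return "green" if actual_pct >= target_pct else "red"

print(session_status(25, 100, 25))  # target met -> green
print(session_status(15, 100, 25))  # 15% against a 25% target -> red
```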
We feel this would be a useful indication of the amount of time available and the amount of time actually spent doing exploratory testing rather than checking (manually or automated).

There are some caveats that go with using this type of measurement.

Within session based test management the tester reports roughly the amount of time they spend:
  • Testing
  • Reporting
  • Environment set-up
  • Data set-up
This is reported as a percentage of the total time in a session, so more detailed reporting can be done within a session, but we feel this information would be of use at a project level rather than a stakeholder level. This is something we could revisit if it would be of use to stakeholders.
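A per-session breakdown of those four categories might look like the sketch below. The field names and the numbers are hypothetical; the only real constraint is that the percentages account for the whole session:

```python
# Hypothetical breakdown of one session, as percentages of total time.
# The four categories come from the session based test management list above.
session_report = {
    "testing": 60,
    "reporting": 15,
    "environment_setup": 15,
    "data_setup": 10,
}

# The percentages should cover the whole session.
assert sum(session_report.values()) == 100
print(f"Time spent testing: {session_report['testing']}%")
```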

Your thoughts on this concept would be most welcome. We see this as a starting point for a discussion that will hopefully provide a useful way to report, at a high level, how much time we spend testing compared to checking.

We are not saying this will work for everyone, but for us it is an ideal way of saying to stakeholders: of all the possible time we could have spent testing (exploratory), this is the amount of time we did spend, and these are the risks that may be associated with that.