Wednesday, 30 October 2013

A quick way to start doing exploratory testing

Whilst following the tweets from Agile Testing Days via the hashtag #agiletd, I came across the following quote from Sami Söderblom during his presentation 'Flying Under the Radar':

"you should remove 'expected results' and 'steps' from test cases"

Others attending the presentation tweeted similar phrases.
Pascal Dufour (@Pascal_Dufour): "#agiletd @pr0mille remove expected result in testcases. Now you have to investigate to get the expected result."
Anna Royzman (@QA_nna): "Remove Expected Result from 'test case' - to start thinking @pr0mille #agiletd"
Dan Ashby (@DanAshby04): "'you should remove 'expected results' and 'steps' from test cases' - by @pr0mille at #AgileTD - couldn't agree more!!!"
Pawel Brodzinski (@pawelbrodzinski): "Removing the expected result thus safety net from the equation enables creativity. Ppl won't look for compliancy anymore. @pr0mille #agiletd"
This was a WOW moment for me. I have sometimes struggled to get people to adopt exploratory testing, with teams finding it hard to create charters and keep them flexible enough.  It may not be an ideal solution, but for me it is a way in to teams that may be deeply entrenched in test cases and test scripts.

Thanks Sami - another creative idea that I most certainly will put into use.

Wednesday, 16 October 2013

Blog Action Day - Human Rights

Today is Blog Action Day and the theme for this year is Human Rights.

More details of this great cause can be found here:

If you want to support this event and have a Twitter account, then use the hashtags #BAD2013 and #humanrights and the Twitter handle @blogactionday.  Please tweet to raise awareness of this cause.

I came across the following poster produced by Zen Pencils and thought it was a fantastic illustration of the meaning of human rights.  I hope he does not mind, but I have used the image on my blog here since it far surpasses anything I could ever have come up with.  Please visit his site and support him as an artist.

Please click on the link to see a high resolution version

Further reading on human rights and how you can be involved
Human Rights Day 10th December 2013

Tuesday, 15 October 2013

Are you ‘Checking’ or ‘Testing’ (Exploratory) Today?

Do you ask yourself this question before you carry out any test execution?

If not, then this article is for you.  It starts with a brief introduction to the meaning of checking and testing in the context of exploration, then asks you to consider the question, evaluate the testing you are doing, and determine from your answer whether what you are doing has the most value.

There have been many discussions within the testing community about what the difference is between ‘checking’ and ‘testing’ and how it fits within the practice of test execution.

Michael Bolton started the debate with his article from 2009 on ‘testing vs checking’ in which he defined them as such:

  • Checking Is Confirmation
  • Testing Is Exploration and Learning
  • Checks Are Machine-Decidable; Tests Require Sapience

At that time there were some fairly strong debates on this subject, and in the main I agreed with the distinctions Michael made between ‘checking’ and ‘testing’ and used them in my approach to testing.

James Bach, working with Michael, then refined the definitions in the article ‘Testing and Checking Refined’:

  • Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modelling, observation and inference.
  • (A test is an instance of testing.)
  • Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
  • (A check is an instance of checking.)

From this they stated that there are three types of checking:

  • Human checking is an attempted checking process wherein humans collect the observations and apply the rules without the mediation of tools.
  • Machine checking is a checking process wherein tools collect the observations and apply the rules without the mediation of humans.
  • Human/machine checking is an attempted checking process wherein both humans and tools interact to collect the observations and apply the rules.

The conclusion, to me, appeared to be that checking is a part of testing, but we need to work out what is best suited to carry out the checking: a machine or a human?  This question is the reason for putting this article together.

James on his website produced a picture to aid visualisation of the concept:

© - James Bach - Testing and Checking refined

Since, in my world, checking forms a part of testing (as I have interpreted James's article), we need to pause for a moment and think about what we are really doing before performing any testing.

We need to ask ourselves this question:

Are we ‘checking’ or ‘testing’ (exploratory) today?

Since both form a part of testing, and both could, depending on the context, be of value and importance to the project, it is vital that we understand what type of testing we are doing.  If we ask ourselves that question and our answer is ‘checking’, we then need to find out what type of checking we are doing.  If the checking being performed falls under the category of machine checking, then we need to ask why a thinking human is doing it rather than a machine.  Validation or verification of requirements or functions for which we already know the answer could fall into this category.  If this is the case, then you need to ask:

  • why are you doing this?
  • why is a machine not doing this?
  • can a machine do this for me?

The problem with a person manually carrying out machine checking is that it uses valuable testing time that could be spent discovering or uncovering information that we do not know or expect.  When people working in software development talk about test automation, this is what I feel they are talking about.  As James and Michael rightly state, there are many other automation tools that testers can use to aid their exploratory testing or human checking, and these could also be classified as test automation.

So even though checking is a part of testing and can be useful, it may not be the best use of your time as a tester.  There are various reasons why testers carry out machine checking by hand rather than automating it, such as:

  • Too expensive
  • Too difficult to automate
  • No time
  • Lack of skills
  • Lack of people
  • Lack of expertise. 

However, if all you are doing during test execution is machine checking, then what useful information are you missing out on finding?

If we go back to the title of this article are you ‘checking’ or ‘testing’ today?

You need to make sure you ask yourself this question each time you test, and evaluate which you feel would have the most value to the project at that time.  We cannot continue with the same ‘must run every test check manually’ mentality, since this only addresses the things we already know and takes no account of the risk or priority of the information we have yet to discover.

The information that may be important or useful is hidden in the things we do not expect or do not currently know about.  To find it, we must explore the system and look for the interesting details that this type of (exploratory) testing yields.

I will leave you with the following for when you are next carrying out machine checking:

…without the ability to use context and expectations to “go beyond the information given,” we would be unintelligent in the same way that computers with superior computational capacity are unintelligent.
J. S. Bruner (1957) Going beyond the information given.

Further reading

Friday, 11 October 2013

Believing in the Requirements

Traditionally in testing, a large amount of emphasis has been placed upon ‘testing’ or ‘checking’ the requirements.  An article by Paul Holland on Functional specification blinders, and my current reading of Thomas Gilovich’s excellent book How We Know What Isn’t So, have made me re-think this strategy from a psychological perspective. I feel Paul was on the right track with his suggestion of not using the requirements/specification to guide your creative test idea generation but looking at alternatives.  However, even these alternatives could limit your thinking and creative ideas because of the way we think.
The problem we have is that once we have been presented with any information, our inbuilt beliefs start to play their part and we look at that information with a biased slant.  We are built to look for confirmations that match our beliefs; in other words, we look for the things we want to believe.  So if we believe the implementation is poor, or that the system under test has been badly designed, we will look for things that confirm this and provide evidence that what we believe is true.  We get a ‘buzz’ when we get a ‘yes’ that matches our beliefs.  The same can apply when looking through the requirements: we start to find things that match our beliefs, and at the same time the requirements (especially if ambiguous) start to influence our beliefs so that, as Paul discovered, we only look for confirmations of what is being said.  Once we have enough information to satisfy our beliefs, we stop and feel that we have done enough.
The other side of this is that any information that goes against our beliefs makes us dig deeper, looking for ways to discount it and to find flaws in it.  The issue is that when we are looking at requirements or specifications, there is normally not much that goes against our initial beliefs, due to the historic influence these documents can have.  So we normally never reach the stage of digging deeper into their meaning.
As Thomas Gilovich stated
People’s preferences influence not only the kind of information they consider, but also the amount they examine.
If we find enough evidence to support our views, then normally we are satisfied and stop.  This limits our scope for testing and for being creative. My thought on how to get around this, apart from following the advice Paul gives, is to be self-critical and to question oneself.
When we are in belief-confirming mode, we internally ask ourselves the following question:
 “Can I believe this?”
Alternatively, when we find information that does not match or confirm our beliefs, we internally ask ourselves the following question:
“Must I believe this?”
These questions are taken from the book by Thomas Gilovich referenced earlier, in which Gilovich states:
The evidence required for affirmative answers to these two questions are enormously different.
Gilovich mentions that this is a type of internal framing we do at a psychological level. Reading this reminded me to go back and reread the article by Michael Bolton on Test Framing, on which I attended a tutorial at the EuroSTAR Testing Conference. I noted that within Michael’s article there appeared, in my opinion, to be a lot of proving one’s beliefs rather than disproving them; in other words, many of the examples were answering the “Can I believe this?” question.  This is not wrong; it is a vital part of testing, and I use the methods described by Michael a great deal in my day-to-day work.  I wonder whether this topic could be expanded a little by looking at the opposite and trying to disprove your beliefs, in other words asking the “Must I believe this?” question.
So moving forward, I believe we can use our biases here to our advantage and become more creative in our test ideas.  To do this, we need to look for ways to go against what we believe is right and think more negatively.  The next time you look at a requirements or specification document, ask yourself: “Must I believe this?”
And see where this leads you.

PS – this article is a double-edged sword – if you have read it, you should now be asking “Must I believe this?”