Friday, 11 October 2013

Believing in the Requirements

Traditionally in testing, a large amount of emphasis has been placed upon ‘testing’ or ‘checking’ the requirements.  An article by Paul Holland on Functional specification blinders, and my current reading of Thomas Gilovich’s excellent book How We Know What Isn’t So, have made me re-think this strategy from a psychological perspective.  I feel Paul was on the right track with his suggestion of not using the requirements/specification to guide your creative test idea generation but looking at alternatives instead.  However, even these alternatives could limit your thinking and creative ideas because of the way we think.
The problem we have is that once we are presented with any information, our inbuilt beliefs start to play their part and we look at that information with a biased slant.  We are built to look for confirmations that match our beliefs; in other words, we look for things we want to believe in.  So if we believe the implementation is poor, or that the system under test has been badly designed, we will look for things that confirm this and provide evidence that what we believe is true.  We get a ‘buzz’ when we get a ‘yes’ that matches our beliefs.  The same can apply when looking through the requirements: we start to find things that match our beliefs, and at the same time the requirements (especially if ambiguous) start to influence our beliefs so that we, as Paul discovered, only look for confirmations of what is being said.  Once we have enough information to satisfy our beliefs we stop, feeling that we have done enough.
The other side of this is that any information that goes against our beliefs makes us dig deeper and look for ways to discount it, searching for flaws in the evidence.  The issue is that when we are looking at requirements or specifications, there is normally not much that goes against our initial beliefs, due to the historic influence these documents can have.  So we normally never get to the stage of digging deeper into the meaning of these documents.
As Thomas Gilovich stated:
People’s preferences influence not only the kind of information they consider, but also the amount they examine.
If we find enough evidence to support our views then we are normally satisfied and stop.  This limits our scope for testing and for being creative.  My thought on how to get around this, apart from following the advice Paul gives, is to be self-critical and to question oneself.
When we are in a mode of confirming our beliefs, we are internally asking ourselves the following question:
 “Can I believe this?”
Alternatively, when we find information that does not match or confirm our beliefs, we internally ask ourselves the following question:
“Must I believe this?”
These questions are taken from the book by Thomas Gilovich referenced earlier, in which Gilovich states:
The evidence required for affirmative answers to these two questions is enormously different.
Gilovich mentions that this is a type of internal framing we do at a psychological level.  Reading this reminded me to go back and re-read the article by Michael Bolton on Test Framing, a topic on which I attended a tutorial at the Eurostar Test Conference.  I noted that within Michael’s article there appeared, IMO, to be a lot of proving a person’s beliefs rather than disproving them.  In other words, many of the examples were answering the “Can I believe this?” question.  This is not wrong; it is a vital part of testing, and I use the methods described by Michael a great deal in my day-to-day work.  I wonder if this topic could be expanded a little by looking at the opposite and trying to disprove your beliefs, in other words asking the “Must I believe this?” question.
So, moving forward, I believe that we can use our biases to our advantage and become more creative in our test ideas.  To do this we need to look at ways to go against what we believe is right and to think more negatively.  The next time you look at a requirements or specification document, ask yourself the following:
“MUST I BELIEVE THIS”
And see where this leads you.

PS – this article is a double-edged sword: if you have read this article you should now be asking “Must I believe this?”

1 comment:

  1. When I first read this article, I thought that the difference between "Can I believe this" and "Must I believe this" was the difference between asking "Can I use this information to hold up my beliefs" and "Should I throw this information out in order to continue holding up my beliefs", but after reading Michael's article and reviewing this again, I see now that the argument you're making is that we should draw a distinction between "Is it possible to believe this?" and "Is it impossible to believe otherwise?".

    I can see a relationship between this idea and the everyday question we ask ourselves during testing about whether a resolution can be considered complete. There's the "Does it do what we think it should?" question that is often easy to answer, and then the "Does it not do what we think it shouldn't?" question that involves thinking about acceptable failure cases. Really, there are lots of places in testing where it's helpful to think about the negative space around testing a feature or defect.
