
Friday, 6 November 2015

Testing Skills # 8 – Being a Skeptic

“Science is the organized skepticism in the reliability of expert opinion.” – Richard Feynman
The prompt for this article was a recent Twitter conversation about how some people avoid skepticism because of its negative connotations. I fully understand what they mean when skepticism is used to attack a person or their character (an ad hominem argument). The purpose of this article is to show how useful a healthy dose of skepticism is for those involved in software testing.

Many people believe that the purpose of testing is to prove that the software works as expected. This is a fallacy: testing cannot prove that the software will always work the way you expect. There is always an element of doubt, due to the nature of software and the many ways it is used. The same holds in scientific research, where someone proposes a theory based on their experiments and then their peers, applying a fair amount of skepticism, try to prove the theory wrong. This is the crux of any scientific research method: it is not just about producing your theory, it is about applying critical thinking to your theory and asking how it could be wrong. Testing software is similar in its approach. It is very difficult to inform someone who matters that the software is working; what we can do is provide useful information about its behaviour and what it is actually doing.
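To make the point concrete, here is a minimal sketch (the function and values are invented for illustration, not taken from any real system): a handful of confirming checks all pass, yet they prove nothing, and a single skeptical check exposes behaviour nobody intended.

```python
# A hypothetical discount function used to illustrate the point.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    return round(price - price * percent / 100, 2)

# Confirming checks: all pass, but passing does not prove correctness.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(50.0, 0) == 50.0

# A skeptical check tries an input the happy path never considered.
# Nothing in the function rejects a percentage over 100, so the
# "working" code happily produces a negative price.
assert apply_discount(100.0, 150) == -50.0
```

The confirming checks answer "does it work for the cases I expected?"; only the skeptical one reveals that the question "does it work?" was never really answered at all.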

As a tester you need to apply critical thinking to your testing and to the evidence you produce. This means being able to look at what you are doing, and at the information you find, with a fair degree of skeptical thought. The scientific method is a useful skeptical tool to apply to your testing and to any other information that people present to you as ‘fact’. One approach is the use of FiLCHeRS:

- Falsifiability
- Logic
- Comprehensiveness
- Honesty
- Replicability
- Sufficiency

If you look at each of these headings you can see that they are about trying to find evidence that your theory and experiment could be wrong; in other words, acting as a skeptic.

This is a vital skill for testers to possess, to help ensure that their testing is unbiased and factually correct. If you feel your skepticism is lacking, look for ways to improve it.

Here are some suggestions to help you become a better skeptic:



If you have suggestions of your own to help others on their journey to becoming a great skeptic, please let me know in the comments section.


Friday, 11 October 2013

Believing in the Requirements

Traditionally in testing there has been a great deal of emphasis placed upon ‘testing’ or ‘checking’ the requirements. An article by Paul Holland on functional specification blinders, and my current reading of Thomas Gilovich's excellent book How We Know What Isn't So, have made me re-think this strategy from a psychological perspective. I feel Paul was on the right track with his suggestion of not using the requirements/specification to guide your creative test idea generation, but looking at alternatives instead. However, even these alternatives can limit your thinking and your creative ideas, because of the way we think.
The problem is that once we are presented with any information, our inbuilt beliefs start to play their part and we look at that information with a biased slant. We are built to look for confirmations that match our beliefs; in other words, we look for the things we want to believe. So if we believe the implementation is poor, or that the system under test has been badly designed, we will look for things that confirm this and provide evidence that what we believe is true. We get a ‘buzz’ when we get a ‘yes’ that matches our beliefs. The same can happen when looking through the requirements: we start to find things that match our beliefs, and at the same time the requirements (especially if ambiguous) start to influence our beliefs so that, as Paul discovered, we only look for confirmations of what is being said. Once we have enough information to satisfy our beliefs, we stop and feel that we have done enough.
The other side of this is that any information that goes against our beliefs makes us dig deeper and look for ways to discount it, to find flaws in the evidence. The issue is that when we look at requirements or specifications there is normally not much that goes against our initial beliefs, because of the historic influence these documents carry. So we normally never reach the stage of digging deeper into the meaning of these documents.
As Thomas Gilovich stated
People’s preferences influence not only the kind of information they consider, but also the amount they examine.
If we find enough evidence to support our views, we are normally satisfied and stop. This limits our scope for testing and for being creative. My thought on how to get around this, apart from following the advice Paul gives, is to be self-critical and to question oneself.
When we are in a mode of confirming our beliefs, we internally ask ourselves the following question:
 “Can I believe this?”
Alternatively, when we find information that does not match or confirm our beliefs, we internally ask ourselves the following question:
“Must I believe this?”
These questions are taken from the Gilovich book referenced earlier, in which he states:
The evidence required for affirmative answers to these two questions are enormously different.
Gilovich mentions that this is a type of internal framing we do at a psychological level. Reading this reminded me to go back and re-read the article by Michael Bolton on Test Framing, a topic on which I attended a tutorial at the EuroSTAR testing conference. I noted that within Michael's article there appeared, IMO, to be a lot of proving of a person's beliefs rather than disproving them. In other words, many of the examples were answering the “Can I believe this?” question. This is not wrong; it is a vital part of testing, and I use the methods described by Michael a great deal in my day-to-day work. I wonder if this topic could be expanded a little by looking at the opposite and trying to disprove your beliefs, in other words asking the “Must I believe this?” question.
So, moving forward, I believe that we can use our biases to our advantage and become more creative in our test ideas. To do this we need to look for ways to go against what we believe is right and to think more negatively. The next time you look at a requirements or specification document, ask yourself:
“MUST I BELIEVE THIS”
And see where this leads you.
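The two questions can even be framed as two styles of check. In this sketch (the username rule and the regex are my invented example, not taken from any of the articles mentioned), the first group asks “Can I believe this?” by confirming the stated requirement, while the second asks “Must I believe this?” by probing boundaries and inputs the requirement never mentions.

```python
import re

# Hypothetical requirement: "a username is 3-20 alphanumeric characters".
USERNAME_RE = re.compile(r"^[A-Za-z0-9]{3,20}$")

def is_valid_username(name):
    return bool(USERNAME_RE.match(name))

# "Can I believe this?" -- confirming checks that match the requirement.
assert is_valid_username("alice")
assert is_valid_username("bob99")

# "Must I believe this?" -- checks that try to falsify the claim:
# boundaries, empty input, and characters the requirement never mentions.
assert not is_valid_username("")           # below the minimum length
assert not is_valid_username("ab")         # boundary: 2 characters
assert is_valid_username("abc")            # boundary: exactly 3
assert is_valid_username("a" * 20)         # boundary: exactly 20
assert not is_valid_username("a" * 21)     # boundary: 21 characters
assert not is_valid_username("al ice")     # space is not alphanumeric
```

The confirming checks alone would have satisfied our beliefs and let us stop early; the falsifying checks are where the requirement (and our reading of it) actually gets tested.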

PS – this article is a double-edged sword: if you have read it, you should now be asking “Must I believe this?”