
Monday, 24 April 2017

Fake News!



It has been a long time since my last post, and I apologize. There have been many factors, some to do with workload and others to do with my health. Suffice to say I may not publish as often, but I still want to put information into the public domain that others may find useful.

The topic for this post is the current wave of 'fake news' articles and how they may have influenced people, persuading them to make decisions different from those they would have made had they not seen such articles.

The concern is that there appears to be little critical thinking done around these stories by those who read them or watch them on the 'news' channels. These articles appeal to people's biases, either via confirmation bias (https://www.psychologytoday.com/blog/science-choice/201504/what-is-confirmation-bias) or appeal to authority (https://yourlogicalfallacyis.com/appeal-to-authority). Humans are easily duped or misled, and it requires conscious effort to overcome these fallacies and others. There are many fallacies that we fall prey to; the following page has a good list with descriptions: http://www.webpages.uidaho.edu/eng207-td/Logic%20and%20Analysis/most_common_logical_fallacies.htm.

There are many guidelines and techniques that can be used to overcome such fallacies and clarify what is truthful or not.

One easy method is to look at the source of the information.

  • Is the source reliable?  
  • Does it come from multiple sources?  
  • Does the source have an undisclosed agenda? 
  • Can the information be verified independently?
Critically analyzing the information presented can help you make a better judgement on what is being said.
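
The source-checking questions above can be sketched as a small script. This is purely illustrative: the questions mirror the bullet list, but the pass/fail scoring is my own invention, not something from the post.

```python
# A minimal sketch of the source-checking questions above as a
# reusable checklist. The scoring scheme is illustrative only.
CHECKLIST = [
    "Is the source reliable?",
    "Does it come from multiple sources?",
    "Is the source free of undisclosed agendas?",
    "Can the information be verified independently?",
]

def assess_source(answers):
    """answers: one boolean per checklist question.
    Returns the fraction of checks that passed."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer per question expected")
    return sum(answers) / len(CHECKLIST)

# Example: a story that fails only the multiple-sources check.
score = assess_source([True, False, True, True])
print(f"{score:.0%} of checks passed")  # 75% of checks passed
```

A low score does not prove an article false, of course; it just flags that more digging is needed before trusting it.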

The following are a couple of techniques for critical thinking that I came across; they are included in my book The Psychology of Software Testing.

The 5 W's and H

Another technique, often used in journalism, is the five W's and one H.  The five W's are Who, What, When, Where and Why; the H is How.

The five Ws and one H have been immortalized in the poem 'I Keep Six Honest Serving Men' by Rudyard Kipling.
I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.
I send them over land and sea,
I send them east and west;
But after they have worked for me,
I give them all a rest.

I let them rest from nine till five,
For I am busy then,
As well as breakfast, lunch, and tea,
For they are hungry men.
But different folk have different views;
I know a person small,
She keeps ten million serving-men,
Who get no rest at all!

She sends 'em abroad on her own affairs,
From the second she opens her eyes,
One million Hows, two million Wheres,
And seven million Whys!

The five W's and one H are a series of questions used to get the complete story, hence their use in journalism.

Reporting on the Three Little Pigs story
  • Who was involved? 
    • The three little pigs (the first pig, the second pig and the third pig) and The Big Bad Wolf (a.k.a. Wolf).
  • What happened?
    •  Each pig constructed a house out of different materials (straw, sticks and bricks). Wolf (allegedly) threatened to blow over their houses and is believed to have destroyed both the straw and stick homes at this time. Pig one and two were able to flee to the brick house, where they remain at the moment. We’re still waiting to hear from local authorities, but it looks like the Wolf may have been injured while attempting to enter the brick house.
  • Where did it take place?
    • Outside a straw house, a stick house and a brick house.
  • When did it take place? 
    • At various times throughout the day.
  • Why did it happen? 
    • Apparently the Big Bad Wolf was trying to eat the pigs. Several eyewitnesses recall the Wolf taunting the pigs before he destroyed the straw and stick homes by chanting, “Little pigs, little pigs, let me in.” The pigs apparently scoffed at the Wolf’s idle threats, saying “Not by the hair of our chinny, chin chins.” It’s believed this angered the Wolf and led to him blowing the houses down.
  • How did it happen? 
    • It would appear the first two homes were not built to withstand the Wolf’s powerful breath. The incident inside the brick house is still being investigated, but early indications suggest the Wolf fell into a boiling pot of water when trying to enter the house through the chimney.
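
Since this is a testing blog, the six questions lend themselves to a small sketch: a bug-report skeleton that flags whichever of the six questions are still unanswered. The field names and example values are my own illustration, not from the post.

```python
# Sketch: the five W's and one H as a bug-report template.
# A complete report answers all six; anything left out is flagged.
FIVE_W_ONE_H = ("who", "what", "when", "where", "why", "how")

def bug_report(**fields):
    """Return the report plus a list of the six questions
    that were left unanswered."""
    missing = [q for q in FIVE_W_ONE_H if q not in fields]
    return {"fields": fields, "missing": missing}

report = bug_report(
    who="logged-in user",
    what="checkout fails with a 500 error",
    when="after a session timeout",
    where="payment page",
    how="submit the basket twice in quick succession",
)
print(report["missing"])  # ['why'] - the root cause is still unknown
```

An unanswered question is not a broken report; like the "still being investigated" lines in the pigs example, it simply makes explicit what is not yet known.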


If you read an article and it does not appear to follow this journalistic technique, that should be cause for concern as to its truthfulness.  You may want to delve deeper and see if the article is accurate and independent in its reporting.

16 Steps to become a critical thinker

The following set of steps is based upon the article 'Intro to Logic: Techniques of Critical Thinking'.
It is a useful critical thinking exercise to examine each of these steps and rewrite them to form your own set of steps to enable critical thinking.
  • Clarify
    • Ask questions to clarify what is being said.  Simplify to aid clarity.
  • Be accurate.
    • Facts can only be in the past.  Is anything in the statement making future predictions?  If so this is not fact.  Are the facts correct? Is there any factual evidence to back up the statement?
  • Be precise.
    • Make sure what is being said is exact; try to avoid ambiguity.
  • Be relevant.
    • Make sure to stick to the issue under discussion, avoid falling for strawman or other fallacies.
  • Know your purpose.
    • Figure out what the most important thing is in the discussion.  Try to remove any related but not relevant information (see 'be relevant' above).
  • Identify assumptions.
    • When involved in critical thinking it is important to be aware that all thinking is based upon some level of assumption. Try to identify these assumptions.
  • Check your emotions.
    • Emotion can directly affect our critical thinking.  Try to keep emotions under control when discussing issues.  Ask yourself: are my emotions influencing my judgement?
  • Empathize.
    • Look at what a person is saying from their viewpoint.  Try to put yourself in their shoes: how would you feel if someone spoke to you in the way you are speaking to them?
  • Know your own ignorance.
    • Know your level of knowledge. You do not know everything and what you do know may be wrong. Be gracious when someone proves you wrong, learn from being wrong.
  • Be independent.
    • Do not follow the crowd.  Verify information with independent thought.  Do your own research to verify what is being claimed.
    • Laurent Bossavit has a wonderful book on this subject called The Leprechauns of Software Engineering.
  • Think through implications.
    • Look at what is being claimed and see what the implications of this claim could be.  Look for alternative implications, both negative and positive.
  • Know your own biases.
    • Being aware of your biases is crucial when involved in critical thinking.  How are they affecting your judgement? Are they affecting your judgement of others?
  • Suspend judgement.
    • Do not arrive at a conclusion and then try to find reasons that support your conclusion.  Use the scientific method as discussed earlier in this chapter. Form a theory on how it should work and then attempt to find ways to disprove your theory. 
  • Consider the opposition
    • Look for alternative and opposite perspectives.  Do not base your conclusion on one source.  Look for sources that disagree with the first source.
  • Recognize cultural assumptions.
    • Be conscious of stereotyping and cultural bias.  Someone being from a different culture or period in time does not mean their views are any less valid than your own.
  • Be fair, not selfish.
    • We are naturally selfish creatures and find it hard to be wrong and admit our mistakes.  Be fair with yourself and others, look for selfish traits in yourself and others.

Speaking events.

I am due to speak at a couple of events this year.


Friday, 11 October 2013

Believing in the Requirements

Traditionally in testing there has been a large amount of emphasis placed upon 'testing' or 'checking' the requirements.  An article by Paul Holland on functional specification blinders, and my current reading of Thomas Gilovich's excellent book How We Know What Isn't So, have made me re-think this strategy from a psychological perspective. I feel Paul was on the right track with his suggestion of not using the requirements/specification to guide your creative test idea generation but looking at alternatives.  However, even these alternatives could limit your thinking and creative ideas because of the way we think.
The problem we have is that once we have been presented with any information, our inbuilt beliefs start to play their part and we look at the information with a biased slant.  We are built to look for confirmations that match our beliefs; in other words, we look for things we want to believe in.  So if we believe the implementation is poor or the system under test has been badly designed, we will look for things that confirm this and provide evidence that what we believe is true.  We get a 'buzz' when we get a 'yes' that matches our beliefs.  The same can apply when looking through the requirements: we start to find things that match our beliefs, and at the same time the requirements (especially if ambiguous) start to influence our beliefs so that we, as Paul discovered, only look for confirmations of what is being said.  Once we have enough information to satisfy our beliefs, we stop, feeling that we have done enough.
The other side of this is that any information that goes against our beliefs makes us dig deeper and look for ways to discount it and find flaws in it.  The issue is that when we are looking at a requirements or specification document, there is normally not much that goes against our initial beliefs, due to the historic influence these documents can have.  So we normally do not get to the stage of digging deeper into the meaning of these documents.
As Thomas Gilovich stated
People’s preferences influence not only the kind of information they consider, but also the amount they examine.
If we find enough evidence to support our views then normally we are satisfied and stop.  This limits our scope for testing and being creative. My thought on how to get around this, apart from following the advice Paul gives, is to be self-critical and question oneself.
When we are in belief-confirming mode, we are internally asking ourselves the following question:
 “Can I believe this?”
Alternatively, when we find information that does not match or confirm our beliefs, we internally ask ourselves:
“Must I believe this?”
These questions are taken from the book by Thomas Gilovich referenced earlier, in which Gilovich states:
The evidence required for affirmative answers to these two questions are enormously different.
Gilovich mentions that this is a type of internal framing we do at a psychological level. Reading this reminded me to go back and re-read the article by Michael Bolton on test framing, on which I attended a tutorial at the EuroSTAR test conference. I noted that within Michael's article there appeared, in my opinion, to be a lot of proving the person's beliefs rather than disproving them.  In other words, many of the examples were answering the “Can I believe this?” question.  This is not wrong; it is a vital part of testing, and I use the methods described by Michael a great deal in my day-to-day work.  I wonder if this topic could be expanded a little by looking at the opposite and trying to disprove your beliefs, in other words asking the “Must I believe this?” question.
So, moving forward, I believe we can use our biases to our advantage to become more creative in our test ideas.  To do this we need to look at ways to go against what we believe is right and think more negatively.  The next time you look at a requirements or specification document, ask yourself the following:
“MUST I BELIEVE THIS”
And see where this leads you.

PS – this article is a double-edged sword: having read it, you should now be asking “Must I believe this?”

Thursday, 9 August 2012

Testing RESPECT


Whilst researching a recent blog article on science vs. manufacturing and testing, I came across an interesting article about scientific standards called the RESPECT code of practice, and I made a mental note to come back to it since I thought it could have some relevance to testing. The article can be found here and a PDF version of the code can be located here.

The purpose of this article is to look at each of the statements about what socio-economic researchers should endeavour to do, and to give my thoughts on how each may apply to testing.

The first paragraph is the one that drew me to the article in the first instance.
Researchers have a responsibility to take account of all relevant evidence and present it without omission, misrepresentation or deception.
It is interesting how closely this relates to the responsibility of the tester when carrying out testing.  We have a duty to ensure that, ethically and morally, we provide a service that meets these responsibilities.

The bit that stood out within the main body of text was the following statement:
does not predetermine an outcome
I still find that within the field of testing there are people writing scripted tests in which they try to predict the outcomes before they actually experience using the application.  This is not testing; testing is exploring the unknown, asking questions and seeing if there is a problem.

Now if we look at the last line of the paragraph
Data and information must not knowingly be fabricated, or manipulated in a way that might lead to distortion
Hmmm? Anyone want to start a discussion on testing metrics?  Cem Kaner talks about validity of metrics here

Then the article gets into the reporting of findings.
Integrity requires researchers to strive to ensure that research findings …. truthfully, accurately and comprehensively…have a duty to communicate their results in as clear a manner as possible.
I get tired of seeing, time and time again, shoddy or poorly documented testing findings and bug reports.  In my world, exploratory testing is not an excuse for poor reporting of what you did and what you found.

The most exciting part of the article was the final paragraph in which they realise that as human beings we are fallible.
…no researcher can approach a subject entirely without preconceptions 
It is therefore also the responsibility of researchers to balance the need for rigour and validity with a reflexive awareness of the impact of their own personal values on the research
The need to understand that we have our own goals and views which could impact and influence our testing is something I talk about a lot within this blog.  We owe it to ourselves to try to be aware of these sometimes irrational and emotional biases.

The following is my attempt to go through each of the statements made in the article and provide my own personal view (with bias), or some external links where others within the testing community have already discussed them.

a) ensure factual accuracy and avoid misrepresentation, fabrication, suppression or misinterpretation of data

See previous link to article by Cem Kaner on metrics, Also by Michael Bolton here  and here by Kaner and Bond

b) take account of the work of colleagues, including research that challenges their own results, and acknowledge fully any debts to previous research as a source of knowledge, data, concepts and methodology

In other words, if you use other people's articles, ideas, etc., give them credit.

c) critically question authorities and assumptions to make sure that the selection and formulation of research questions, and the conceptualisation or design of research undertakings, do not predetermine an outcome, and do not exclude unwanted findings from the outset

STOP accepting that because it has always been done this way, it must be right.

d) ensure the use of appropriate methodologies and the availability of the appropriate skills and qualifications in the research team

An interesting one. I do not take this as meaning getting certified; other people may.  I take it to mean we have a responsibility to ensure that everyone we work with has the relevant skills, and if they do not, to mentor and support them to obtain those skills.  Encourage self-learning, look at all the available approaches you can use for testing, and select the one most suitable for you.

e) demonstrate an awareness of the limitations of the research, including the ways in which the characteristics or values of the researchers may have influenced the research process and outcomes, and report fully on any methodologies used and results obtained (for instance when reporting survey results, mentioning the date, the sample size, the number of non-responses and the probability of error)

In other words, be aware of both your own limits and project limits such as time, money or risk.  Testing is an infinite task, so when reporting make sure it is clear that your sample of ‘tests’ is very small in comparison with all the possible ‘tests’ you could do.

f) declare any conflict of interest that may arise in the research funding or design, or in the scientific evaluation of proposals or peer review of colleagues’ work

Does this apply to testing?  If you are selling a tool or a certification training scheme then this should be stated clearly on any material you publish regarding testing.

g) report their qualifications and competences accurately and truthfully to contractors and other interested parties, declare the limitations of their own knowledge and experience when invited to review, referee or evaluate the work of colleagues, and avoid taking on work they are not qualified to carry out

To me if you stop learning about testing and act like one of the testing dead (see article by Ben Kelly – here) then you are not qualified to carry out testing.

h) ensure methodology and findings are open for discussion and full peer review

Do not hide your testing effort inside a closed system to which only a privileged few have access.  Make sure all your testing effort is visible to everyone within your company (use wikis, for example).

i) ensure that research findings are reported by themselves, the contractor or the funding agency truthfully, accurately, comprehensively and without distortion. In order to avoid misinterpretation of findings and misunderstandings, researchers have a duty to seek the greatest possible clarity of language when imparting research results
  
In other words, make sure that what you report is what you actually did when testing, and that you report it clearly and unambiguously.

j) ensure that research results are disseminated responsibly and in language that is appropriate and accessible to the target groups for whom the research results are relevant

Make sure that all relevant parties have access to your findings: communicate, talk, discuss.  As stated earlier, do not hide your findings; publish them for all to see, warts and all.

k) avoid professional behaviour likely to bring the socio-economic research community into disrepute

We all have a duty as testers to be professional in our behaviour, and this means that even when we disagree we still need to respect each other's views and be able to participate in a debate without making others feel inferior.

l) ensure fair and open recruitment and promotion, equality of opportunity and appropriate working conditions for research assistants whom they manage, including interns/stagiaires and research students

Employers and recruitment agencies: STOP using multiple-choice certification schemes as a filter for working in testing.  Holding one of these certificates does not mean that you can test.

m) honour their contractual obligations to funders and employers

This is a given; no comment needed.

n) declare the source of funding in any communications about the research.

If what you are publishing serves your own self-interest, or a vested interest from which you can receive funds, then please be honest and up front about this.  As professionals, we can then make an informed decision about the content.

The context-driven testing school has a list of principles here, and it is interesting to compare the two.  There appears to be some overlap, but maybe we could improve the context-driven list by using more of the RESPECT code of practice.  What do others think?  A good starting point, maybe?

Friday, 4 May 2012

Great Expectations

I recently spent some time running exploratory testing workshops in India and found I had some free time to start reducing the mountain of books on my Kindle. I managed to read two books by Dan Ariely:

Predictably Irrational, Revised and Expanded Edition: The Hidden Forces That Shape Our Decisions

The Upside of Irrationality: The Unexpected Benefits of Defying Logic at Work and at Home


Within these books there are some great insights into how we think we behave versus how we actually behave; Dan calls the work he does behavioural economics.  There are many interesting studies he talks about in his books, and most of them I can relate to software testing.  The one I want to pick up on for this blog article is how we can be easily influenced into following a certain path, acting in a predictable way, by having our expectations manipulated.

The worlds of advertising and marketing have very clever ways of manipulating us into buying their products.  One of these is to 'prime' us: by making us think of a subject or a product, they cause us to unconsciously act in a way that makes us want that product and only that product.

For example, if right now I asked you to come up with words associated with being elderly, what would you come up with?  Suppose for the next 10 minutes you thought about this and came up with a list of positive and negative words for 'elderly'.  If I then asked you to perform some tasks, you would be slower, take more time, and notice little aches and pains, all from just thinking about the term 'elderly'.  Association has a very powerful effect on our unconscious.  Taking this a stage further, once you have been primed your expectations have been manipulated, so you tend to have a bias towards the initial priming.  For example, if you are told beforehand that a certain type of coffee is unique, expensive, has a secret ingredient and tastes wonderful, at some point you will have to try it, and once you do, because of all the priming, you have to like it (if you like coffee, that is; replace coffee with chocolate, beer or whatever your favourite thing is).  Your expectations have forced you to enjoy it.  Even if your taste buds are saying it tastes vile, if you have paid a lot of money for it and have been told many times how wonderful it is, you will tell yourself it is wonderful and amazing.  Priming is a powerful bias that can override many other indicators.

Now, what if I told you that the secret ingredient is elephant dung?  With this knowledge your mind will be changed.  And what if I had told you this before you decided to buy the product; would you still have bought it?

So how does all of this relate to testing?

Imagine if all you are doing when 'testing' is validating the requirements: your expectations have been primed and managed.  If you keep hearing from the development teams that the software is bad, or that the model being used is poor, these all prime you to automatically assume the product is poor and that the requirements are what we should expect the product to match.  Can you see how dangerous this would be?  You are priming yourself to only confirm what the requirements, or other people, are saying.

One of the ways to help resolve this is to use an exploratory testing approach, which can help to reduce your expectations and assumptions about the product under test.  It tries to achieve this through the use of models, oracles and heuristics, to ensure that your beliefs, biases and expectations are constantly being challenged as you test.

Michael Bolton at DevelopSense has recently written some articles on oracles and heuristics on his blog.

Friday, 23 March 2012

More randomness

I have just finished reading the excellent book ‘The Drunkard's Walk: How Randomness Rules Our Lives’ by Leonard Mlodinow and found lots of useful bits of information that relate to what we can experience when testing. The premise of the book (spoiler alert) is that randomness affects our lives all of the time. It asks why some people become very successful while others with similar talents are not as successful. It explains our natural tendency to form patterns where no pattern exists, and gives examples of how we form mental relationships between independent events where there is no relationship (better known as regression towards the mean). It is a really interesting book and ties in with my previous posts on how our cognitive biases can easily fool us, and the connection to testing.

One of the best lessons I took from the book was about how, once we have formed a theory, we go about proving that theory correct or not.

The example used is as follows (see how well you do)

“Suppose I tell you that I have made up a rule for the construction of a sequence of three numbers and that the sequence 2, 4, 6 satisfies my rule. Can you guess the rule? A single set of three numbers is not a lot to go on, so let’s pretend that if you present me with other sequences of three numbers, I will tell you whether or not they satisfy my rule. Please take a moment to think up some three-number sequences to test


Now that you have pondered your strategy, I can say that if you are like most people, the sequences you present will look something like 4, 6, 8 or 8, 10, 12 or 20, 24, 30. Yes, those sequences obey my rule. So what’s the rule? Most people, after presenting a handful of such test cases, will grow confident and conclude that the rule is that the sequence must consist of increasing even numbers. But actually my rule was simply that the series must consist of increasing numbers. The sequence 1, 2, 3, for example, would have fit; there was no need for the numbers to be even. Would the sequences you thought of have revealed this?”

Did you notice that the author used the term “test case”? These are like little test cases to prove a theory or idea you have. The author talks in great depth about why people get this wrong most of the time: once we form an idea or theory, we search for ways to prove the idea correct rather than ways to prove it wrong, even though there are many more ways to prove a theory wrong than right. This is called confirmation bias, and I talked about it in my blog here.
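
The 2-4-6 exercise can be made concrete in code. Here is a minimal sketch of it: the confirming triples people typically offer satisfy both the hidden rule and the guessed rule, so they can never tell the two apart; only a disconfirming probe like 1, 2, 3 separates them.

```python
# Mlodinow's 2-4-6 exercise. The hidden rule is simply
# "strictly increasing"; the rule most people guess is
# "increasing even numbers".
def hidden_rule(a, b, c):
    return a < b < c

def guessed_rule(a, b, c):
    return a < b < c and a % 2 == b % 2 == c % 2 == 0

confirming_probes = [(4, 6, 8), (8, 10, 12), (20, 24, 30)]
disconfirming_probe = (1, 2, 3)

# Confirming probes satisfy BOTH rules, so they teach us nothing:
for probe in confirming_probes:
    assert hidden_rule(*probe) and guessed_rule(*probe)

# The disconfirming probe separates the two rules: it passes the
# hidden rule but fails the guess, falsifying the guessed rule.
print(hidden_rule(*disconfirming_probe))   # True
print(guessed_rule(*disconfirming_probe))  # False
```

The testing lesson drops out directly: a probe designed to confirm your theory cannot distinguish it from the truth, whereas a probe designed to break it can.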

Does this sound like what we do in testing? We do so much to prove the requirements are correct when really we should be trying to prove them wrong. Within the world of software testing there is a great deal of trying to confirm (validate) what we already know about the software rather than trying to ‘test’ what we do not know. Having read this book I can see why it is so easy to fall into this trap: in most cases our natural instinct is to look for ways to prove our theory correct rather than to disprove it.

When we test, it appears as if we are fighting our natural instincts, and we can get feelings of uneasiness; hence why some people may struggle to adopt a more exploratory testing approach and find it difficult to move from a confirmation style of testing (checking) towards a style of asking questions to which I do not know the answer. This feeling is commonly known as cognitive dissonance (I blogged about this here).

If we can understand that we as testers should feel uneasy, and that it is part of our remit to fight our natural instincts, we can then use this as a tool to improve how we test.

The example used in the book is, in my opinion, a great illustration of what testing is, and is one of the takeaways from the book.

Thursday, 23 February 2012

Patterns from Nothing

Continuing on from my previous post on cognitive illusions, I thought I would start with our human capacity to be fooled into seeing patterns where there are none. It is common for people to find shapes or objects when staring at the clouds, or to think that there is a pattern of luck in the game they are playing. Given a random set of data, we will naturally make a pattern from it; Ben Goldacre talks about this in his book Bad Science [1]. It is in our nature, and we are over-sensitive to making patterns when none exist. Look at the following example of tosses of a coin (H = heads, T = tails):

HHHHHHHHHHHTHHHHHHHHHHHH

Now what conclusion would you make from this set of results?

Have you come up with any?

If you have come up with a conclusion, that is your natural instinct and intuition creating a pattern: a cognitive illusion. Given that the coin is fair, the chance that the sequence above would happen is the same as for any other sequence. Take this one step further: what will the result be on the next coin toss?

What would you answer?

Why would this be your answer?

Statistically, the possibility of it being H or T is 50/50: an equal chance. This is the reason casinos make so much money; they know we are all fallible and use that against us. We make the mistake of thinking there is a pattern and that our luck must change. I am sad to inform you that there is no luck; the chances stay the same, and within a casino the odds will always be against you.
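
Both claims are easy to check numerically. The sketch below (streak length and sample sizes are arbitrary choices of mine) shows that every specific toss sequence is equally improbable, and that even immediately after a run of heads the next toss is still roughly 50/50: the coin has no memory.

```python
import random

# Every specific sequence of 24 fair-coin tosses is equally likely,
# streaky-looking or not:
p_any_24_toss_sequence = 0.5 ** 24   # ~6e-08, the same for EVERY sequence

# Simulate: directly after a streak of 5 heads, the next toss
# is still roughly 50/50.
random.seed(42)  # fixed seed so the illustration is reproducible
after_streak = []
while len(after_streak) < 2_000:
    tosses = [random.choice("HT") for _ in range(6)]
    if tosses[:5] == ["H"] * 5:          # found a 5-head streak
        after_streak.append(tosses[5])   # record the toss that follows it

heads_fraction = after_streak.count("H") / len(after_streak)
print(round(heads_fraction, 1))  # close to 0.5 - no "due" tails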

The same can be applied to those who follow sport and come across the phrase that someone is on a lucky streak; this again is our natural bias to create a pattern where none exists. For example, suppose a soccer player has the following goal-scoring record (X means scored in the game, O means did not score):

XXXXXOOXXXXXXXXXXOOOOOOOO

Our tendency to create a pattern means that we will take that data and say the player has had two lucky scoring streaks and is currently having a dip in form. With such simple data it is easy to formulate assumptions and make patterns where there is no pattern, especially when there is no context. The simple example above shows the need for context: if I gave you the extra information that the player has spent the last ten games playing in the senior side instead of the juniors, would that change your conclusion?
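
The "lucky streak" reading can also be checked by simulation: purely random scoring records contain long streaks surprisingly often, so a streak by itself is weak evidence of form. A sketch (the 60% scoring rate is an assumed figure, not from the record above):

```python
import random

def longest_streak(record, symbol="X"):
    """Length of the longest run of `symbol` in a record string."""
    best = run = 0
    for game in record:
        run = run + 1 if game == symbol else 0
        best = max(best, run)
    return best

# The record from the text contains a 10-game scoring streak:
print(longest_streak("XXXXXOOXXXXXXXXXXOOOOOOOO"))  # 10

# How often does pure chance produce a 10+ streak in 25 games,
# assuming an (illustrative) 60% chance of scoring each game?
random.seed(1)  # fixed seed for reproducibility
trials = 10_000
hits = sum(
    longest_streak("".join(random.choice("XXXOO") for _ in range(25))) >= 10
    for _ in range(trials)
)
print(f"{hits / trials:.1%} of random records contain a 10+ streak")
```

A non-trivial share of memoryless random records shows such a streak, which is exactly why the streak alone does not establish "form" without extra context, like the move from the junior to the senior side.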

So how does this apply to testing?

There is talk within testing that we should trust our intuition (I am one of the people who talk about this) and go with our gut feelings; Malcolm Gladwell describes this to great effect in his book Blink [2]. However, we need to be aware that our intuition can fool us and try to create patterns while we are carrying out our testing. The problems come when we start to see these patterns and they cause us to miss other information that may be important.

For an example of this, watch the following video (information provided by Gordon Pitz [3]).



Have you watched the video?

No?

Please go and watch it, it will help you understand the rest of this article.







Did you see the Gorilla? [4]

No?

This might be due to being distracted and focused on a task. Noticing patterns, and forming assumptions from them when none exist, can have the same effect; it shows that our minds are easily distracted and can miss important information. When we are testing, we should not spend too much time investigating apparent patterns: our natural instinct is to see patterns everywhere, and we could end up missing far more important information.

This is vital when we are using the exploratory testing approach, where it is very easy to go off track, away from our mission, to investigate what we think is a pattern of behaviour in the system under test. In these situations it is best to make a note of it and continue on track.

Sometimes it is difficult to go against what is natural, and some find it near impossible; this could be one reason why the exploratory testing approach does not suit them or feels too difficult. I hope this article encourages those who have struggled to have another go, knowing that they may have been fighting their own instincts, which made it appear harder than it is.

So are there techniques that can be used to help counter this bias?

The problem is that this is a natural, built-in instinct; being aware of it does not necessarily mean we can remove it.

“Knowing that it exists does not remove it”
Gordon Pitz [3]

There are a few techniques that could help.

One, described previously for session-based testing, is to keep to your mission and make a note of interesting patterns you think are emerging. Later, in a feedback session with others, explain your thoughts about the pattern and see if they see it too. If they do not, it may be that you are seeing a pattern where there is none.
Another way to help prevent this bias is paired testing. There is growing evidence that social facilitation [5] can help reduce cognitive bias, and paired testing is one way to make use of it: we seem to be more attentive and aware when we are being observed. It should be used with caution, though, since people perform badly under observation when the task is complex and difficult, so it only really works when the task is not overly complex.

One more technique I have found invaluable is test framing, as described by Michael Bolton [6]. I attended a course on this and recommend that people read the article on his website. The approach helps the tester focus on the purpose of the test, and it has a useful side effect: it helps remove the bias to see patterns where there are none. It works especially well when you have to justify your reasoning.

The next article will look at the cognitive illusion of regression to the mean and its possible impact on testing.

References:

[2] Blink: The Power of Thinking Without Thinking – Malcolm Gladwell
[3] The Deceptive Nature of Intuition – Gordon Pitz – http://www.unc.edu/~gpitz/pdf/Chabris-Simons%20review.pdf
[4] The Invisible Gorilla – Christopher Chabris and Daniel Simons – http://www.amazon.com/Invisible-Gorilla-How-Intuitions-Deceive/dp/0307459667
[5] Social facilitation – https://en.wikipedia.org/wiki/Social_facilitation
[6] Test framing – Michael Bolton – http://www.developsense.com/blog/2011/05/ive-been-framed/








Wednesday, 8 February 2012

Cognitive Illusions

or how your mind plays tricks on you.

People who regularly read my blog may be aware that I have a keen interest in psychology and how it relates to testing. If you have not read my blog before, welcome, first-timer! I hope you enjoy it and come back for more articles in the future.

I have in the past written a few articles about bias (here, here, here and here) and how it can be dangerous when we are testing. Having just read an excellent book called Bad Science by Ben Goldacre, I thought I would revisit the subject, since Ben has a whole chapter on it called

‘Why Clever People Believe Stupid Things’

It is a very interesting chapter and it made me think again about the need to be careful when we are testing and reporting what we believe has happened. The human mind is a tricky beast, and it has various ways of tricking us into believing things that are not true.

For example, take a look at the following picture by the Swiss artist Felice Varini (his site is in French). It is a fantastic anamorphic illusion in which our mind joins all the pieces together to make us see something that is not really there.




Looking at it from a different perspective shows us this.




An important lesson in testing is not to look at things from only one point of view. See how our mind tricks us into thinking something is real when it is not.

Ben Goldacre breaks down some of the common tricks our minds play into the following:

# Randomness
# Regression to the Mean
# The bias towards positive evidence
# Biased by our prior beliefs
# Availability
# Social influences

He concludes with the following statements:

1 - We see patterns where there is only random noise.
2 - We see causal relationships where there are none.
3 - We overvalue confirmatory information for any given hypothesis.
4 - We seek out confirmatory information for any given hypothesis.
5 - Our assessment of the quality of new evidence is biased by our previous beliefs.
6 - Our assessment of the quality of new evidence is biased by our social influences.

(I added the 6th one myself)

Once we become aware of these illusions, we can start to put practices in place that help to counter them. I should warn you that it is impossible to remove them entirely, we are only human after all, but being aware that they exist is a good start.

Over the next few blog articles I will take each of these topics in turn and apply it to testing.

Tuesday, 27 July 2010

DANGER - Confirmation Bias

In my previous blog post I touched upon a term called confirmation bias and why, as testers, we should be aware of it. I said I would put a post together on the subject, so here it is.

I should start by defining what confirmation bias is.

Confirmation bias refers to a type of selective thinking whereby one tends to notice and to look for what confirms one's beliefs, and to ignore, not look for, or undervalue the relevance of what contradicts one's beliefs:- http://www.skepdic.com/confirmbias.html

The reason I started to look more into confirmation bias was due to the following article in Ars Technica - http://arstechnica.com/science/news/2010/07/confirmation-bias-how-to-avoid-it.ars

A good example of this: if you are thinking of buying a new car, all of a sudden you seem to notice lots and lots of the model you were thinking of purchasing. Your mind is conditioning itself to notice that make and model, making you see it more often, even though there are no more of them than before; you just appear to be seeing them everywhere.

Another example: you start talking to a friend about a certain film and actor, and then suddenly notice lots of coincidences. The actor is in an advert, the film is being shown again on TV, a supporting actor appears in another film you have just started to watch. The following gives a good example of this: http://youarenotsosmart.com/2010/06/23/confirmation-bias/

If there were no such thing as confirmation bias, there would be no conspiracy theories. Conspiracy theories are built on information that appears to prove the theory correct; those who believe in the theory ignore the evidence that debunks it.

So why is there any concern for testers?

Let us start with an example.

You are working closely with the development team and you start to ask them questions about the release you are about to test. You ask their viewpoint on which areas they feel are the most risky and which the least, so you can adjust your priorities as required: a pretty standard exchange between developers and testers. You then start testing, beginning with the high-risk areas and working your way down to the low-risk ones.

You find a few serious bugs in the high-risk areas (as expected) and no problems in the low-risk areas.

After release, a major bug is reported in the low-risk area you tested. How did you miss it? Did you see the bug but assume everything was working as intended? Did confirmation bias play a part? Did your subconscious hide the bug from you? This gets very scary: most people who work in software testing know that bugs try to hide from you, but we expect them to hide in the software. What happens if they decide to hide in your brain?

So how can we try and prevent confirmation bias?

The quick and easy way to reduce confirmation bias is to ensure that more than one tester tests the same feature. Each may bring their own confirmation bias, but hopefully it will differ from the previous tester's. The chance that it differs is greater if the testers have not discussed the area under test beforehand.
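A rough back-of-the-envelope illustration of why this helps (the 70% detection rate is an assumed figure of my own, and the calculation treats the testers as fully independent, which is exactly what not discussing the area beforehand is trying to achieve):

```python
# If each tester independently notices a given bug with probability
# p_single, the chance that at least one of n testers catches it is
# 1 - (1 - p_single) ** n.
p_single = 0.7
for n in (1, 2, 3):
    p_any = 1 - (1 - p_single) ** n
    print(f"{n} tester(s): {p_any:.0%} chance the bug is noticed")
```

Under these assumptions a second independent tester lifts the odds from 70% to 91%; but the more the testers share the same bias, the less independent they are, and the smaller that improvement becomes.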

Another way to counter confirmation bias is "paired testing", either with a software engineer, another tester or a user. That way you can question each other about what is true and what is false. There is a chance that you could cross-contaminate each other with your own confirmation biases, but the risk should be lower than when working on your own.

It is not easy to remove confirmation bias, because it is infectious. Modern software development requires testers to communicate more and more with other areas of the business, and with each conversation confirmation bias can be introduced.

So should we lock ourselves away in a dark room with no communication with anyone else on the team? I think I would get out of testing as a career if that happened; the Social Tester (@Rob_Lambert) would become the anti-social tester, and it would be time to get him an ASBO (for our non-UK readers: http://en.wikipedia.org/wiki/Anti-Social_Behaviour_Order).

My view is that there is no realistic way to prevent confirmation bias entirely, given how software development projects work and the need for everyone on the team to communicate. However, if testers are aware that confirmation bias exists, they can take steps to stop it creeping into their testing. That is the whole point of this blog: to raise awareness of confirmation bias and how it can affect your testing.