Friday, 23 March 2012

More randomness

I have just finished reading the excellent book ‘The Drunkard's Walk: How Randomness Rules Our Lives’ by Leonard Mlodinow and found lots of useful bits of information that relate to what we experience when testing. The premise of the book (spoiler alert ----) is that randomness affects our lives all of the time. It asks why some people become very successful while others with similar talents do not. It explains our natural tendency to see patterns where no pattern exists. It gives examples of how we form mental relationships between independent events where there is no relationship, and of how we misread ordinary statistical fluctuation as something meaningful (regression towards the mean). It is a really interesting book and ties in with my previous posts on how our cognitive biases can easily fool us, and the connection to testing.

One of the best lessons I took from the book concerns what happens once we have formed a theory: how do we show whether that theory is correct or not?

The example used is as follows (see how well you do):

“Suppose I tell you that I have made up a rule for the construction of a sequence of three numbers and that the sequence 2, 4, 6 satisfies my rule. Can you guess the rule? A single set of three numbers is not a lot to go on, so let’s pretend that if you present me with other sequences of three numbers, I will tell you whether or not they satisfy my rule. Please take a moment to think up some three-number sequences to test


Now that you have pondered your strategy, I can say that if you are like most people, the sequences you present will look something like 4, 6, 8 or 8, 10, 12 or 20, 24, 30. Yes, those sequences obey my rule. So what’s the rule? Most people, after presenting a handful of such test cases, will grow confident and conclude that the rule is that the sequence must consist of increasing even numbers. But actually my rule was simply that the series must consist of increasing numbers. The sequence 1, 2, 3, for example, would have fit; there was no need for the numbers to be even. Would the sequences you thought of have revealed this?”

Did you notice that the author used the term “test case”? These are like little test cases to prove a theory or idea you have. The author talks in great depth about why people get this wrong most of the time. The reasoning is that once we form an idea or theory we search for ways to prove the idea correct rather than for ways to prove it wrong, yet there are many more ways to show a theory is wrong than to show it is right. This is called confirmation bias, and I talked about it in my blog here.
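To see why confirming probes never expose the real rule, here is a minimal sketch in Python (my own illustration, not something from the book), with the hidden rule simply being that the numbers must increase:

# The hidden rule from the example: the three numbers must be increasing.
def hidden_rule(seq):
    a, b, c = seq
    return a < b < c

# Probes a "confirming" tester might offer: they all fit the guessed rule
# "increasing even numbers", so they can never expose that guess as wrong.
confirming_probes = [(4, 6, 8), (8, 10, 12), (20, 24, 30)]

# Probes chosen to try to *break* the guessed rule: odd numbers, a
# decreasing sequence, a constant one.
disconfirming_probes = [(1, 2, 3), (5, 7, 11), (6, 4, 2), (3, 3, 3)]

for probe in confirming_probes + disconfirming_probes:
    print(probe, "->", hidden_rule(probe))

# Every confirming probe passes and tells us nothing new. Only the
# disconfirming probes reveal that evenness is irrelevant and that the
# increasing order is what actually matters.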

Does this sound familiar to what we do in testing? We do so much to prove that the requirements are correct when really we should be trying to prove that they are wrong. To me, within the world of software testing there is a great deal of trying to confirm (validate) what we already know about the software rather than trying to ‘test’ what we do not know. Having read this book I can see why it is so easy to fall into this trap: in most cases our natural instinct is to look for positive ways to prove our theories correct rather than to try to disprove them.

When we test it appears as if we are fighting our natural instincts, and that can bring feelings of uneasiness. This may be why some people appear to struggle to adopt a more exploratory testing approach and find it difficult to move from a confirmation style of testing (checking) towards a style of asking questions to which they do not know the answers. This feeling is commonly known as cognitive dissonance (I blogged about this here).

If we can understand that we as testers should expect to feel uneasy, and that fighting our natural instincts is part of our remit, we can use that awareness as a tool to improve how we test.

The example used in the book is, in my opinion, a great illustration of what testing is, and it is one of my main take-aways from the book.

Thursday, 23 February 2012

Patterns from Nothing

Continuing on from my previous post on Cognitive Illusions, I thought I would start with our human ability to be fooled into seeing patterns where there are none. It is common for people to find shapes or objects when staring at the clouds, or to think that there is a pattern of luck associated with the game we are playing. We can look at a random set of data and, without doubt, we will naturally make a pattern out of it. Ben Goldacre talks about this in his book Bad Science [1]. It is in our nature, and we are over-sensitive to making patterns when none exist. Look at the following example of tosses of a coin (H = heads, T = tails):

HHHHHHHHHHHTHHHHHHHHHHHH

Now what conclusion would you make from this set of results?

Have you come up with any?

If you have come up with a conclusion, that is your natural instinct and intuition creating a pattern: a cognitive illusion. Given that the coin is fair, the chance of the sequence above occurring is the same as that of any other sequence of the same length. Now take this one step further: I ask you to say what the result will be on the next coin toss.

What would you answer?

Why would this be your answer?

Statistically, the probability of the next toss being H or T is 50/50: an equal chance. This is the reason casinos make so much money; they know we are all fallible and they use that against us. We make the mistake of thinking there is a pattern and that our luck must change. I am sad to inform you that there is no luck: the chances are still the same, and within a casino the odds will always be against you.
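As a quick illustration, here is a small Python sketch (my own, assuming a fair coin) showing that the ‘streaky’ sequence above is exactly as likely as any other specific sequence of the same length, and that the next toss is still 50/50:

from fractions import Fraction

sequence = "HHHHHHHHHHHTHHHHHHHHHHHH"   # the run of tosses shown above

# With a fair coin, every *specific* sequence of this length is equally
# likely, no matter how "streaky" it looks.
p_this_sequence = Fraction(1, 2) ** len(sequence)
print(len(sequence), "tosses, probability", p_this_sequence)

# And the next toss is independent of everything that came before it.
print("P(next toss is heads) =", Fraction(1, 2))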

The same can be applied to those who follow sport and come across the phrase that someone is on a lucky streak; this again is our natural bias to create a pattern where none exists. For example, a soccer player has the following goal-scoring record (X means scored in the game, O means did not score):

XXXXXOOXXXXXXXXXXOOOOOOOO

Our tendency to create patterns means that we will take that data and say the player has had two lucky scoring streaks and is currently having a dip in form. With such simple data it is easy to formulate assumptions and make patterns where there is no pattern, especially when there is no context. The simple example above shows the need for some context: if I gave you the additional information that for the last ten games the player has been playing in the senior side instead of the juniors, would that change your conclusion?
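It is also worth noting that streaks like these appear all the time in purely random data. Here is a rough Python sketch (my own illustration, treating the scoring chance as a fixed 0.6 per game, roughly the 15 goals in 25 games above) that estimates how often a random record of the same length contains a ‘streak’ of five or more scoring games:

import random

def longest_run(record, symbol="X"):
    # Length of the longest unbroken run of `symbol` in the record.
    best = current = 0
    for r in record:
        current = current + 1 if r == symbol else 0
        best = max(best, current)
    return best

random.seed(1)
games = 25        # same length as the record above
p_score = 0.6     # 15 goals in 25 games, treated as a fixed per-game chance
trials = 10_000

streaky = sum(
    longest_run("".join("X" if random.random() < p_score else "O"
                        for _ in range(games))) >= 5
    for _ in range(trials)
)
print(streaky / trials)   # the share of purely random records that still
                          # contain a run of five or more scoring games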

So how does this apply to testing?

There is talk within testing that we should trust our intuition (I am one of the people who talks about this) and go with our gut feelings. Malcolm Gladwell describes this to great effect in his book Blink [2]. However, we need to be aware that our intuition can fool us and create patterns while we are carrying out our testing. The problems come when we start to see these patterns and they cause us to miss other information that may be important.

For an example of this, watch the following video (information provided by Gordon Pitz [3]).



Have you watched the video?

No?

Please go and watch it; it will help you understand the rest of this article.







Did you see the Gorilla? [4]

No?

This might be due to being distracted and focused on a task. Noticing patterns and forming inconclusive assumptions when there are none can have the same effect, and it shows that our minds can easily be distracted and miss important information. When we are testing, it is important not to spend too much of our time looking for and investigating patterns; since our natural instinct is to see patterns, we could end up missing far more important information.

This is vital when we are testing using the exploratory testing approach where it is very easy to go off track and away from our mission to investigate what we think is a pattern of behaviour within the system under test. It is best in these situations to make a note of it and continue on track.

Sometimes it is difficult to go against what is natural, and some find it near impossible; this could be one of the reasons why the exploratory testing approach may not suit them, or why they find it too difficult. I hope that this article will encourage those who have struggled to have another go, knowing that they may have been fighting against their own instincts, which made it appear more difficult than it is.

So are there techniques that can be used to help counter this bias?

The problem is that this is a natural, built-in instinct; being aware of it does not necessarily mean we can remove it.

“Knowing that it exists does not remove it”
Gordon Pitz [3]

There are a few techniques that could help.

One, described previously when using session-based testing, is to keep to your mission and make a note of the interesting patterns you think are emerging. Later, when you debrief with others, explain your thoughts about the pattern and see whether they see the same one. If they do not, it could be that you are seeing a pattern where there is none.
Another way that may help to prevent this bias is paired testing. There is growing evidence that social facilitation [5] can help to reduce cognitive bias, and paired testing is one way to make use of it: we seem to be more attentive and aware when we are being observed. It should be used with caution, though, since people tend to perform badly under observation when the task is complex and difficult, so it only really works when the task is not overly complex.

One more technique that I have found invaluable is test framing, as described by Michael Bolton [6]. I attended a course on this and I recommend that people read the article on his website. Using this approach helps the tester focus on the purpose of the test, and it has a useful side effect: it can help to remove the bias to see patterns where there are none. It works especially well when you have to justify your reasoning.

The next article will look at the cognitive illusion of regression to the mean and its possible impact on testing.

References:

[1] Bad Science – Ben Goldacre
[2] Blink – Malcolm Gladwell
[3] The Deceptive Nature of Intuition – Gordon Pitz – http://www.unc.edu/~gpitz/pdf/Chabris-Simons%20review.pdf
[4] The Invisible Gorilla – Christopher Chabris and Daniel Simons – http://www.amazon.com/Invisible-Gorilla-How-Intuitions-Deceive/dp/0307459667
[6] Test framing – Michael Bolton – http://www.developsense.com/blog/2011/05/ive-been-framed/








Wednesday, 8 February 2012

Cognitive Illusions

or how your mind plays tricks on you.

People who regularly read my blog may be aware that I have a keen interest in psychology and how it relates to testing. If you have not read my blog before: wow, welcome, first-timer! I hope you enjoy it and come back for more articles in the future.

I have in the past written a few articles about bias (here, here, here and here) and how it can be dangerous when we are testing. Having just read an excellent book called Bad Science by Ben Goldacre, I thought I would revisit the subject, since Ben has a whole chapter on it called:

‘Why Clever People Believe Stupid Things’

It is a very interesting chapter and it made me re-think the need to be careful when we are testing and reporting what we believe has happened. The human mind is a tricky beast, and there are various methods it uses to trick us into believing things which are not true.

For example, take a look at the following picture by the artist Felice Varini (his site is in French). This is a fantastic anamorphic illusion in which our mind joins all the pieces together to make us see something that in reality is not there.




Looking at it from a different perspective shows us this.




An important lesson in testing is not to look at things from only one point of view. See how our mind tricks us into thinking something is real when it is not.

Ben Goldacre manages to break down some of the common tricks our mind plays into the following:

# Randomness
# Regression to the Mean
# The bias towards positive evidence
# Biased by our prior beliefs
# Availability
# Social influences

He concludes these with the following statements:

1 - We see patterns where there is only random noise.
2 - We see causal relationships where there are none.
3 - We overvalue confirmatory information for any given hypothesis.
4 - We seek out confirmatory information for any given hypothesis.
5 - Our assessment of the quality of new evidence is biased by our previous beliefs.
6 - Our assessment of the quality of new evidence is biased by our social influences.

(I added the 6th one myself)

Once we become aware of these illusions that our mind plays on us, we can start to put practices in place that help to counter them. I should warn you that it is impossible to remove them entirely, since we are only human after all, but being aware that they exist is a good start.

Over the next few blog articles I will be taking each one of these topics and applying it to testing.

Friday, 13 January 2012

The Purpose of Testing

I have a strong passion for psychology and the social sciences and their connection to software testing. I currently have a few books on the go on these subjects and hope to write up my thoughts on these books and their connection to testing in the future within the blog.
For those who are interested, the books I am currently reading (and re-reading, to make sure that the things I assumed from them are correct) are:

One interesting quote I found in the book by Ian Dey was the following:

Exploration, description and explanation are the three purposes of social science research.
Earl Babbie

Looking at this quote made me think about what the purposes of testing are, and I came to the conclusion that they are the same as those Earl Babbie gives for social science research.

If we break this down we have:

  • Exploration: this is done using exploratory testing, charters, missions etc.
  • Description: let us describe what we are doing and what we have done when testing.
  • Explanation: let us explain to managers, peers and stakeholders what we found when testing.

There are lots of articles, discussions and books about the purpose of testing and how very complex it is; in my opinion this single-sentence quote sums up everything about the purpose of testing.

I made a note to have a look at what Earl Babbie has to say and found that he has written lots of articles and books, some of which may apply to software testing. It looks like I have added a few more books to my ever-expanding reading list.

Friday, 9 December 2011

Apprenticeship schemes at Test Conferences

A quick blog on a thought I have had.

I read an article today about how we could try to fix the IT skills gap that exists within the UK (this may also apply around the world) by getting young adults into apprenticeships. My view is that academic study is not for everyone, and some people would be better suited to a vocational training course than a university degree. I never went to university and as such I do not have a degree. Do I feel as if I have missed out? I do not think so, but I have not experienced university life, so I cannot be sure whether I missed something I may have liked.

I think that within our profession of testing we have an opportunity to mentor and help create the next generation of testers (not discounting coders and architects), allowing them to build up their skills and knowledge by learning from experience rather than studying non-relevant subjects at university (how many universities offer testing as a degree?). As Nassim Nicholas Taleb has said, we human beings are far better at learning from doing than from books. Over the past year I have been mentoring two people in our craft of testing; one mentorship is still ongoing, the other person has managed to secure a tester role within a company, and neither had been involved in testing beforehand. I feel we within our community should be doing this, encouraging young adults by taking them under our tutelage; it does not require a large personal investment, just a few hours per week. Or maybe within our companies we should all start looking at introducing apprenticeship schemes. Let's try to tap into this vast resource of people who, in my opinion, feel they have been abandoned by the educational system.

On the other side, I want to call out to those who run conferences, EuroSTAR, CAST, Let's Test, UNICOM, and say: let's advertise for young adults who may have an interest to come along as apprentices for the length of the conference. They would not pay a fee but would be expected to produce a report on their thoughts and the actions they intend to take away for the future. I have not finalized these thoughts, but it would give these young adults the chance to engage in a craft which I myself feel very passionate about.

Maybe the organizations that run the conferences could look at running an apprenticeship competition or vetting process. I am sure there are many vocational colleges (both in the UK and around the world) who would be willing to get involved. It would have the added effect of raising the value of testing in the minds of the next generation of influential people, and of putting testing out there as a forward-thinking craft that people want to get involved with.

What do others think?

I would especially love some feedback from conference organizers to see how feasible these ideas are.

Thursday, 8 December 2011

Recipe Knowledge

This is my response to a blog article written by Paul Gerrard.

http://gerrardconsulting.com/index.php?q=node/599

I was going to post this as a comment but thought it would be better as a separate blog article.

I am not sure I agree entirely with what Paul was saying, but that is the point of a good blog article. I will say I do entirely agree with his conclusion that we have to have our eyes open and our brains switched on. There are methods that can be used to prevent the ‘quitting’ process and the rambling-around-in-the-dark approach to exploratory testing, but I think that would be an entirely different article.

However, I would suggest people try searching for articles on avoiding (or being aware of) bias, cognitive research methods, and focusing and defocusing skills. Another thing to look at is air traffic control work patterns; controllers work in time-boxed shifts. Is this similar to session-based testing? The point I want to make is that the issue Paul raises about domain knowledge, and the usefulness that scripts may bring, is an important one.

I am not in the camp that says we should abandon scripts, and a lot of the people I communicate with are not saying that either. I feel there are a lot of Chinese whispers with regard to the views some people hold on the use of scripted tests. I cannot recall anyone saying to me that we must abandon scripts in favour of just doing exploratory testing (is that a bias, and am I deliberately missing or not noticing information?). We can also train ourselves not to quit by using a variety of cognitive processes, especially checklists and heuristics. These ‘tools’ enable us to counter the quitting instinct by triggering new paths, observations and comparisons.

Testing is not just about finding things; it is about asking questions and forming theories based on the answers (evidence) given while experiencing the software. This may lead to more questions, further evaluation, and re-evaluation of what you already thought, debunking and disproving your theories. Finding bugs is a side effect of this approach, a very useful side effect, but it is not the sole purpose of testing.

There is a term used within society (especially the social science community) called ‘recipe’ knowledge. It is often devalued by academics since it is a step-by-step instruction for learning something. In the everyday world, recipes tell you what to use, what you will need (the ingredients) and exactly what procedures to follow; this should sound familiar to scripts in the testing world. These recipes can provide important foundations for acquiring or developing skills, or as we would say in the software development world, for learning domain knowledge. People using a recipe, as we know, may not follow it exactly; they may taste the product and adjust it to their own personal taste, and so move away from the script. However, we should not pretend that learning a recipe is the same as learning a skill.

If, for example, we look at baking, it requires a ‘knack’ which can only come from experience (if you have tried baking bread you will understand this). Like qualitative analysis, baking also permits creativity and the development of your own style.

At some point the skilled tester, like the experienced chef, may stop using the recipe book and start to experiment, exploring different tastes and ways to discover more and hopefully improve their skills. At the same time, the recipe (script) remains a useful tutorial for the newcomer to the art.

Some of the content used here is taken from the following book:

Qualitative Data Analysis: A User-friendly Guide for Social Scientists - Ian Dey

Wednesday, 7 December 2011

A ‘title’ is of value to someone who matters

Recently I attended the EuroSTAR testing conference in Manchester and came away with some mixed messages and thoughts about the content of the conference. Some of the presentations and tracks were really good, whilst others appeared to repeat the same old information. I hope to write a few blog articles on some of the positive messages I got from the conference, along with lots of ideas I have about social science and how it can be used within testing; these may have to wait until after the holiday period.

The reason for writing this blog post is what appeared to be a negative message coming from some of the keynote presentations; this is my opinion and how I understood the messages in the context of my views on testing and testers. The one point I wish to raise (and maybe rant about) is one of the statements that James Whittaker made:

“at Google ‘Tester’ has disappeared from people’s job titles. People who were ‘testers’ are now ‘developers’ and are expected to code regularly”

Now, my thoughts on this may be taking the point James was making out of context; however, I am not sure in what other context it could be made.

During the presentation James made the point that testers should be part of the team and not get bogged down in who has what role, and I wholeheartedly agree with that.

However, from a social and status perspective, people need to be able to identify with a title, and there has been a lot of talk within the development community about removing titles, especially the title of tester. Take the following scenario:

You go out on a social evening with a group of friends from work and their partners: a project manager, a developer, a business analyst and a tester. As the evening proceeds, each person is asked by a non-team member what they do at work.

The developer could reply that they write code and create applications.

The tester could reply that they test to ensure the system works.

The project manager could reply that they make sure everyone knows what targets they have to meet.

The business analyst could say that they provide information on what the customers who will use the application need.

I would say that each person answering this question would be proud of their job title and what they do.

So my take is that making a statement saying we should get rid of the title of tester and call everyone a developer is a little insulting, and it makes me personally feel unappreciated and undervalued. I have been working as a tester for a long time now, and whilst I can understand that within a team people can have a variety of roles and responsibilities, why should I have to give up something that I feel passionate about? I wonder what would be said if, at a developers' conference, everyone was told they are now going to be called a business analyst, since we all provide something that the customer wants.

Why does everyone have to be a developer within a project? My concern is: why has the word ‘tester’ become such a dirty word? It is as if we should be ashamed of what we are and what our title is.

I AM A TESTER AND PROUD OF IT!