Thursday, 23 February 2012

Patterns from Nothing

Continuing on from my previous post on Cognitive Illusions, I thought I would start with our human ability to be fooled into seeing patterns from nothing. It is common for people to find shapes or objects when staring at the clouds, or to think that there is a pattern of luck associated with the game we are playing. We can look at a random set of data and, without doubt, we will naturally make a pattern. Ben Goldacre talks about this in his book Bad Science [1]. It is in our nature, and we are over-sensitive to seeing patterns when none exist. Look at the following example of tosses of a coin (H = Heads, T = Tails).

HHHHHHHHHHHTHHHHHHHHHHHH

Now what conclusion would you make from this set of results?

Have you come up with any?

If you have come up with a conclusion, that is your natural instinct and intuition creating a pattern: a cognitive illusion. Given that the coin is fair, the chance of the sequence above occurring is exactly the same as that of any other sequence. Take this one step further and I ask you to say what the result will be on the next coin toss.

What would you answer?

Why would this be your answer?

Statistically, the chance of it being H or T is 50/50. This is one reason casinos make so much money: they know we are all fallible and use that against us. We make the mistake of thinking there is a pattern and that our luck must change. I am sad to inform you that there is no luck; the chances are still the same, and within a casino the odds will always be against you.
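To make this concrete, here is a small sketch (the two sequences are illustrative, the second one is just made up to "look" random) showing that any specific run of fair-coin tosses is exactly as likely as any other:

```python
from fractions import Fraction

# Any specific sequence of 24 fair-coin tosses is exactly as likely as
# any other: each toss halves the probability, whatever pattern we "see".
streaky = "HHHHHHHHHHHTHHHHHHHHHHHH"   # the run of heads from the article
shuffled = "HTHTHHTTHTHHTHTTHHTHTHTH"  # a sequence that "looks" random

def probability(sequence):
    # A fair coin gives each toss probability 1/2, independently.
    return Fraction(1, 2) ** len(sequence)

assert probability(streaky) == probability(shuffled)
print(probability(streaky))  # 1/16777216
```

Both sequences come out at exactly 1 in 16,777,216; the "pattern" in the first one adds nothing to the maths.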

The same can be applied to those who follow sport and come across the phrase that someone is on a lucky streak; this again is our natural bias to create a pattern when none exists. For example, a soccer player has the following goal-scoring record (X means scored in the game, O means did not score):

XXXXXOOXXXXXXXXXXOOOOOOOO

Our tendency to create a pattern means that we will take that data and say the player has had two lucky streaks of scoring and is currently having a dip in form. With such simple data it is easy to formulate assumptions and make patterns where there is no pattern, and it is especially easy when there is no context. The simple example above shows the need for context: if I told you that for the last ten games the player above has been playing in the senior side instead of the juniors, would that change your conclusion?
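As a rough sketch (the simulation parameters here are my own, not from any real player data), purely random 50/50 scoring records routinely contain streaks that look like "form":

```python
import random

# Each of 26 games is an independent 50/50 "scored"/"did not score",
# mirroring the X/O record above; no skill or form is modelled at all.
def longest_streak(record):
    best = run = 0
    for scored in record:
        run = run + 1 if scored else 0
        best = max(best, run)
    return best

random.seed(1)  # fixed seed so the sketch is repeatable
streaks = [longest_streak(random.choices([True, False], k=26))
           for _ in range(10_000)]

# A sizeable fraction of purely random records still contain a scoring
# "streak" of five or more games; a streak alone proves nothing.
print(sum(s >= 5 for s in streaks) / len(streaks))
```

Run it and a meaningful share of these coin-flip "players" show long scoring streaks, with no luck or form involved at all.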

So how does this apply to testing?

There is talk within testing that we should trust our intuition (I am one of those people) and go with our gut feelings. Malcolm Gladwell describes this to great effect in his book Blink [2]. However, we need to be aware that our intuition can fool us and try to create patterns while we are carrying out our testing. The problems come when we start to see these patterns and they cause us to miss other information that may be important.

For an example of this watch the following video (Information provided by Gordon Pitz [3] )



Have you watched the video?

No?

Please go and watch it; it will help you understand the rest of this article.







Did you see the Gorilla? [4]

No?

This might be due to being distracted and focused on a task. Noticing patterns, and forming inconclusive assumptions when there are none, can cause the same effect; it shows that our minds can be easily distracted and miss important information. When we are testing, it is important not to spend too much of our time investigating patterns: since our natural instinct is to see patterns, we could end up missing far more important information.

This is vital when we are using the exploratory testing approach, where it is very easy to go off track and away from our mission to investigate what we think is a pattern of behaviour within the system under test. It is best in these situations to make a note of it and continue on track.

Sometimes it is difficult to go against what is natural, and some find it near impossible; this could be one of the reasons why some people find the exploratory testing approach unsuitable or too difficult. I hope this article will encourage those who have struggled to have another go, knowing that they may have been fighting against their own instincts, which made it appear more difficult than it is.

So are there any techniques that can be used to help resolve this bias?

The problem is that this is a natural, built-in instinct; being aware of it does not necessarily mean we can resolve it.

“Knowing that it exists does not remove it”
Gordon Pitz [3]

There are a few techniques that could help.

One, described previously, applies when using session-based testing: keep to your mission and make a note of interesting patterns that you think are emerging. Later, when you give feedback to others, explain your thoughts about the pattern and see if they see the same one. If they do not, it could be a case of seeing a pattern where there is none.
Another way which may help to prevent this bias is paired testing. There is growing evidence that social facilitation [5] can help to reduce cognitive bias, and paired testing is one way to make use of it: we seem to be more attentive and aware when we are being observed. It should be used with caution, though, since people perform badly under observation when the task is complex and difficult, so it is only really suitable when the task is not overly complex.

One more technique that I have found invaluable is test framing, as described by Michael Bolton [6]. I attended a course on this and I recommend that people read the article on his website. Using this approach helps the tester to focus on the purpose of the test, but it also has a useful side effect: it can help to counter the bias to see patterns where there are none. It works especially well when you have to justify your reasoning.

The next article will look at the cognitive illusion of regression to the mean and its possible impact on testing.

References:

[1] Bad Science – Ben Goldacre
[2] Blink – Malcolm Gladwell
[3] The Deceptive Nature of Intuition – Gordon Pitz – http://www.unc.edu/~gpitz/pdf/Chabris-Simons%20review.pdf
[4] The Invisible Gorilla – Christopher Chabris and Daniel Simons – http://www.amazon.com/Invisible-Gorilla-How-Intuitions-Deceive/dp/0307459667
[6] Test Framing – Michael Bolton – http://www.developsense.com/blog/2011/05/ive-been-framed/








Wednesday, 8 February 2012

Cognitive Illusions

or how your mind plays tricks on you.

People who regularly read my blog may be aware that I have a keen interest in psychology and how it relates to testing. If you have not read my blog before: wow, welcome, first-timer! I hope you enjoy it and come back for more articles in the future.

I have in the past written a few articles about bias (here, here, here and here) and how it can be dangerous when we are testing. Having just read an excellent book called Bad Science by Ben Goldacre, I thought I would revisit this subject, since Ben has a whole chapter on it called

‘Why Clever People Believe Stupid Things’

It is a very interesting chapter and it made me re-think about the need to be careful when we are testing and reporting what we believe has happened. The human mind is a tricky beast and there are various methods it uses to try and trick us into believing things which are not true.

For example, take a look at the following picture by the artist Felice Varini (the site is in French). This is a fantastic anamorphic illusion in which our mind joins all the pieces together to make us see something that is not really there.




Looking at it from a different perspective shows us this.




An important lesson in testing is not to look at things from only one point of view. See how our mind tricks us into thinking something is real when it is not.

Ben Goldacre manages to break down some of the common tricks our mind plays into the following:

# Randomness
# Regression to the Mean
# The bias towards positive evidence
# Biased by our prior beliefs
# Availability
# Social influences

He concludes with the following statements:

1 - We see patterns where there is only random noise.
2 - We see causal relationships where there are none.
3 - We overvalue confirmatory information for any given hypothesis.
4 - We seek out confirmatory information for any given hypothesis.
5 - Our assessment of the quality of new evidence is biased by our previous beliefs.
6 - Our assessment of the quality of new evidence is biased by our social influences.

(I added the 6th one myself)

Once we become aware of these illusions our minds play on us, we can start to put practices in place that help to remove them. I should warn you that it is impossible to remove them entirely, since we are only human after all, but being aware that they exist is a good start.

Over the next few blog articles I will take each one of these topics and apply it to testing.

Friday, 13 January 2012

The Purpose of Testing

I have a strong passion for psychology and the social sciences and their connection to software testing. I currently have a few books on the go on these subjects and hope to write up my thoughts on these books and their connection to testing in the future within the blog.
For those that are interested the books I am currently reading (and re-reading – to make sure that things I assumed from within the books are correct) are:

One interesting quote I found in the book by Ian Dey was the following:

Exploration, description and explanation are the three purposes of social science research.
Earl Babbie

Looking at this quote made me think about what the purposes of testing are, and I came to the conclusion that they are the same as those of social science research, as stated by Earl Babbie.

If we break this down we have:

  • Exploration – done using exploratory testing, charters, missions, etc.
  • Description – describing what we are doing and what we have done when testing
  • Explanation – explaining to managers, peers and stakeholders what we found when testing

There are lots of articles, discussions and books about the purpose of testing and how very complex it is; in my opinion, this single-sentence quote sums it all up.

I made a note to look at what Earl Babbie has to say and found he has written lots of articles and books, some of which could apply to software testing. It looks like I have added a few more books to my ever-expanding reading list.

Friday, 9 December 2011

Apprenticeship schemes at Test Conferences

A quick blog on a thought I have had.

I read an article today about how we could try to fix the IT skills gap that exists within the UK (this may also apply around the world) by getting young adults into apprenticeships. I have a view that academic study is not for some people, and that they would be better suited to a vocational training course instead of a university degree. I never went to university and as such I do not have a degree. Do I feel as if I have missed out? I do not think so, but having not experienced university life I cannot be sure whether I missed something I may have liked.

I think within our profession of testing we have an opportunity to mentor and help create the next generation of testers (not discounting coders and architects), allowing them to build up their skills and knowledge by learning from experience rather than studying non-relevant subjects at university (how many universities offer testing as a degree?). As Nassim Nicholas Taleb has said, we human beings are far better at learning from doing than from books. Over the past year I have been mentoring two people in our craft of testing: one is still ongoing, the other has managed to secure a tester role within a company; neither had been involved in testing beforehand. I feel we within our community should be doing this, encouraging young adults by taking them under our tutelage; it does not require a large personal investment, just a few hours per week. Or maybe within our companies we should all start looking at introducing apprenticeship schemes. Let's try to tap into this vast resource of people who, in my opinion, feel they have been abandoned by the educational system.

On the other side, I want to call out to those who run conferences (EuroSTAR, CAST, Let's Test, UNICOM) and say: let's advertise for young adults who may have an interest to come along as apprentices for the length of the conference. They would not pay a fee, but would be expected to produce a report on their thoughts and what actions they intend to take away for the future. I have not finalized these thoughts, but it would give these young adults a chance to engage with a craft I feel very passionate about.

Maybe the organizations that run the conferences could look at running an apprenticeship competition or vetting process. I am sure there are many vocational colleges (both in the UK and around the world) who would be willing to get involved. It has the added effect of raising the value of testing in the minds of the next generation of influential people, and of putting testing out there as a forward-thinking craft that people want to get involved with.

What do others think?

I would especially love some feedback from conference organizers to see how feasible these ideas are.

Thursday, 8 December 2011

Recipe Knowledge

This is my response to a blog article written by Paul Gerrard.

http://gerrardconsulting.com/index.php?q=node/599

I was going to post this as a comment but thought it would be better as a separate blog article.

I am not sure I entirely agree with what Paul was saying, but that is the point of a good blog article. I do entirely agree with his conclusion that we have to have our eyes open and our brains switched on. There are methods that can be used to prevent the 'quitting' process and the rambling-around-in-the-dark approach to exploratory testing, but that would be an entirely different article.

However, I would suggest people try searching for articles on avoiding (or being aware of) bias, cognitive research methods, and focusing and defocusing skills. Another thing to look at is air traffic control work patterns: controllers work in time-boxed shifts; is this similar to session-based testing? The point I want to make is that the issue Paul raises about domain knowledge, and the usefulness that scripts may bring, is an important one.

I am not in the camp that says we should abandon scripts, and a lot of the people I communicate with are not saying that either. I feel there are a lot of Chinese whispers with regard to some people's views on the use of scripted tests. I cannot recall anyone saying to me that we must abandon scripts in favour of just doing exploratory testing (is that a bias, and am I deliberately missing or not noticing information?). We can also train ourselves not to quit, using a variety of cognitive processes, especially checklists and heuristics. These 'tools' enable us to counter the quitting instinct by triggering new paths, observations and comparisons.

Testing is not just about finding things; it is about asking questions and forming theories based on the answers (evidence) given while experiencing the software. This may lead to more questions, further evaluation, and re-evaluation of what you already thought, debunking and disproving your theories. Finding bugs is a side effect of this approach, a very useful side effect, but it is not the sole purpose of testing.

There is a term used within society (especially the social science community) called 'recipe' knowledge, which is often devalued by academics since it is a step-by-step instruction for learning something. In the everyday world, recipes tell you what to use, what you will need (ingredients) and exactly what procedures to follow; this sounds familiar to scripts in the testing world. These recipes can provide important foundations for acquiring or developing skills, or, as we would say in the software development world, learning domain knowledge. People using a recipe may not follow it exactly; they may taste the product and adjust it for their own personal taste, moving away from the script. However, we should not pretend that learning a recipe is the same as learning a skill.

If we look at baking, for example, it requires a 'knack' which can only come from experience (if you have tried baking bread you will understand this). Like qualitative analysis, baking also permits creativity and the development of your own styles.

The skilled tester at some point, like the experienced chef, may stop using the recipe book and start to experiment and explore different tastes and ways to discover more, hopefully improving their skills. At the same time, the recipe (script) remains a useful tutorial for the newcomer to the art.

Some of the content used here is taken from the following book:

Qualitative Data Analysis: A User-friendly Guide for Social Scientists - Ian Dey

Wednesday, 7 December 2011

A ‘title’ is of value to someone who matters

Recently I attended the EuroSTAR Testing Conference in Manchester and came away with some mixed messages and thoughts about the content. Some of the presentations and tracks were really good, whilst others appeared to repeat the same old information. I hope to write a few blog articles on some of the positive messages I took from the conference, along with lots of ideas I have about social science and how it can be used within testing; these may have to wait until after the holiday period.

The reason for writing this blog post is what appeared to be a negative message coming from some of the keynote presentations; this is my opinion and how I understood the messages in the context of my views on testing and testers. The one point I wish to raise (and maybe rant about) is one of the messages that James Whittaker made:

“at Google ‘Tester’ has disappeared from people’s job titles. People who were ‘testers’ are now ‘developers’ and are expected to code regularly”

Now, my thoughts on this may be taking the point James was making out of context; however, I am not sure in what other context it could be made.

James made the point during the presentation that testers should be part of the team and not get bogged down in who has what role, and I wholeheartedly agree with that.

However, from a social and status perspective, people need to be able to identify with a title, and there has been a lot of talk within the development community about removing titles, especially the title of tester. Take the following scenario:

You go out on a social evening with a group of work colleagues and their partners: a project manager, a developer, a business analyst and a tester. As the evening proceeds, each person is asked by a non-team member what they do at work.

The developer could reply: I write code and create applications

The tester could reply that they test to ensure the system works

The project manager could reply that they make sure everyone knows what target they have to meet

The Business analyst could say they provide information on what the customers who will use the application need

Each person answering this question I would say would be proud of their job title and what they do.

So my take is that making a statement saying we should get rid of the title of tester and call everyone a developer is a little insulting, and it makes me personally feel unappreciated and undervalued. I have been working as a tester for a long time now, and whilst I can understand that within a team people can have a variety of roles and responsibilities, why should I have to give up something I feel passionate about? I wonder what would be said if, at a developers' conference, everyone was told they are now going to be called a business analyst, since we all provide something that the customer wants.

Why does everyone have to be a developer within a project? My concern is: why has the word 'tester' become such a dirty word? It is as if we should be ashamed of what we are and what our title is.

I AM A TESTER AND PROUD OF IT!

Friday, 4 November 2011

Defining Testing

I am about to run a couple of internal workshops on the Exploratory Testing approach, which is based upon a lot of work done by Michael Bolton and James Bach. One of my recent concerns is what people within the organisation think testing is, in comparison to what they are actually doing. So I started to put together an article looking at these concerns and trying to see if there is a problem. This blog is based upon some of the points I cover in that article.
WARNING
Disclaimer:

The views and definitions expressed in this article are my own and as such they may not match what a dictionary may say or agree with your views/definitions.

When I start to look at what we see as testing activities they appear to fall into three distinct categories:
  • Validation
  • Verification
  • Testing

These terms may be familiar to some of the older readers of this blog. V, V & T has been around for a long time and has its origins within the manufacturing industry, where it has been the main process for providing quality control and assurance on production lines. (http://en.wikipedia.org/wiki/Verification_and_validation)

It appears that these ‘manufacturing’ processes have been applied to software testing (http://en.wikipedia.org/wiki/Verification_and_Validation_(software))

This seems to have led to the appearance of process standards, initially the ISO 9000 quality assurance standard, which was modified to become the ISO 9001:2008 standard, which includes software. These standards are very closely linked to manufacturing processes and, from a software testing perspective, to quality control methods.

Talking to and observing various companies, I have seen that a lot of people's perception of testing is as shown in the photo below.

http://brigitteofseon.wordpress.com/category/work-hard-no-go-slow/

Is it a problem to have this perception of software testing?

At the beginning of my career in software testing, a lot of companies started to change from being mainly hardware manufacturers to both hardware and software manufacturers. There was a need among these companies for processes they could use to prove the 'quality' of their software products, and the general consensus was that what had worked in quality control for hardware could surely be applied to software testing.

The reasoning behind this was based upon some fairly flawed assumptions:

  • All software was the same
  • All software worked in the same way
  • All users would follow the designed work flows.
  • All users would behave in the same way

The main focus of these processes was to validate and verify what was already known about the product and its expected inputs and outputs. In my opinion, following quality control and assurance processes is not really testing. Testers 'normally' do not control the quality (yes, there are approaches such as TDD which 'may' help). If there is crap in the system you are testing, there will still be crap in the system afterwards; testers provide a service telling you it is there. Michael Bolton talks more about getting out of the QA business here.

Validation and verification will, in the majority of cases, NOT:

  • tell us anything new about the product
  • make us ask questions of the product

So what do I mean when I talk about validation and verification?

Validation:

To me, validation is about proving what you already know about the product: confirming that what the requirements say is correct and that the system behaves as you believe it should. The normal response when validating will be:

  • true or false
  • yes or no
  • 0 or 1

I see validation as a checking exercise (see the article by Michael Bolton here on testing v checking) rather than a testing exercise. It still has some value within the testing approach, but it will not tell you anything new about the system being tested; it will prove that what you already know about the system is correct and working (or not working). This is like testing requirements, or validation of fields in a database/GUI: you know what the input is and what outputs you expect according to the specification/requirements, so why not automate this?
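As a minimal sketch of what that automation could look like (the field and its rules here are hypothetical, not taken from any real system), a validation check reduces each known input to a yes/no answer:

```python
# Hypothetical spec: an "age" field must be an integer from 0 to 130.
# The check only confirms what the spec already tells us; it produces
# a true/false answer, never new information about the system.
def is_valid_age(value):
    return (isinstance(value, int) and not isinstance(value, bool)
            and 0 <= value <= 130)

# Each case pairs a known input with the output the spec predicts.
cases = [(25, True), (0, True), (130, True),
         (-1, False), (131, False), ("25", False)]
for value, expected in cases:
    assert is_valid_age(value) == expected
print("all validation checks passed")
```

Note that every expected answer was known before the checks ran; that is exactly why this kind of work is a good candidate for automation rather than repeated manual effort.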

The majority of 'testing' I see happening is validation, and even though it has some value, I would not count validation as testing, since it does not tell you anything new about the system you are testing.

It should be noted that interpreting the results from validation 'testing'/checking still requires human judgement to work out whether what happened was the correct expected response.

Verification:

When I look at the term verification, I use it for verifying any bugs that have been previously found. Someone has made a change to the product and I want to verify that the change has fixed the problem I saw before. Some verification tests can be automated: for example, if you have run a test previously and found the problem, you may be able to automate the steps you followed so that you do not need to run the same test manually again.
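A sketch of how such a verification could be automated; the bug, its number and the `normalise_username` helper are all hypothetical, purely for illustration:

```python
# Hypothetical bug: a trailing space in a username broke login matching.
# The fix normalises the input; the test replays the original steps that
# reproduced the bug, so the fix stays verified on every future run.
def normalise_username(raw):
    return raw.strip().lower()

def test_bug_1234_trailing_whitespace_ignored():
    assert normalise_username("Alice ") == "alice"
    assert normalise_username("  ALICE") == "alice"
    assert normalise_username("bob") == "bob"

test_bug_1234_trailing_whitespace_ignored()
print("bug fix verified")
```

Once captured like this, the verification runs for free with every build, and the tester's time is freed up for actual testing.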

Testing:

I see testing as a thinking exercise in which a person needs to use their skills (and brain) to ask questions of the system being tested. From asking these questions they learn more about the system and its behaviour. They will not know the answer in advance, but by investigating and tinkering with the product they can form a reasonable answer to the question they posed. When testing, we act like crime investigators: we suspect foul play, but we need to ask questions and gather evidence to back up our theories and provide answers to our questions. Testing is not based upon requirements or specifications, but rather on what the specifications and requirements are not saying.

Testing is about asking the

  • What
  • Why
  • How

Nassim Nicholas Taleb came up with the following interesting quote:

We are better at doing than learning. Our capacity for knowledge is vastly inferior to our capacity for doing things – our ability to tinker, play, discover by accident. http://www.fooledbyrandomness.com/notebook.htm

So after all of this is there really a problem?

Some of the problems I see within the software testing industry are:

  • We spend far too much time validating rather than testing
  • We repeat the same validations (manually) time and time again
  • We cover less of the system by only repeating the same validations
  • Testing becomes a checking exercise rather than a testing exercise
  • Testers are not engaged
  • Testers are not challenged
  • Testers do not need to think
  • People see testing as a boring thing to do
  • Testers, if used only to manually validate, are seen as robots

What can be done to improve this?

  • Look to automate the validation (checking stuff)
  • Improve coverage by changing the data sets used in validation
  • Start to use exploratory testing approach – attend a rapid software testing course
  • Look at using Session based testing
  • THINK engage your mind and question the system.

We need to keep learning about testing and do more testing, rather than repeatedly validating systems; it then becomes a much better and more challenging role to be a tester.