People who follow my blog may already know that I have an interest in psychology and how it can affect our thinking in relation to testing. Previously I have written articles on emotions, feelings and confirmation bias. In this article I intend to look at Cognitive Dissonance and how much of a help or a hindrance it can be for a tester.
There have been many studies on Cognitive Dissonance and whether it really exists, but it would be best to describe what it is first. Wikipedia describes it in very simple terms:
Cognitive dissonance is an uncomfortable feeling caused by holding conflicting ideas simultaneously.
(http://en.wikipedia.org/wiki/Cognitive_dissonance)
The earliest account of Cognitive Dissonance is often credited to Aesop’s fable of the Fox and the Grapes, in which the fox wants to eat the grapes but cannot reach them. To remedy the conflict between wanting the grapes and being unable to have them, he decides that the grapes are either not ripe or too sour; hence the title of this blog.
My thoughts on this conflict of beliefs or opinions, and on how we adjust to resolve it, are something I feel could benefit testers, especially when carrying out exploratory testing.
As testers I think it is important that we recognise when we are experiencing Cognitive Dissonance and learn to ask questions before we make a decision to resolve the conflict. The problem with experiencing cognitive dissonance is that it is very easy to change our opinion or belief, and suddenly we are drawn into the trap of confirmation bias.
As humans we do not like the feeling of conflict within our minds, and we try to make a decision to resolve it. Once we have made that decision we will try to justify the reasoning for the choice we made.
For example, suppose you had two areas to test, each of the same level of importance, and you chose area A instead of area B. You will now subconsciously favour choice A over choice B. If someone then comes up to you and says that area B is more important, you have a conflict, and your mind needs to give reasons as to why you chose A. You may make statements such as "well, at least area A has been tested; it was important for me to test it." You will try to justify the reasoning for making what now appears, in your mind, to be the wrong choice.
When testing, if we have two conflicting oracles and you (or the product owner) need to decide which one is correct, cognitive dissonance could come into play. A wrong decision could be made. The problem comes at a later date: when you need to justify why you made that decision, you will change your opinion or belief to convince yourself that the decision you made was the correct one.
It can become worse if you use a rating system to rate identical items. Imagine you are in a team and the team is given the task of rating features to be tested or developed. These ratings are then used to determine the order in which features for the product are developed. You make a choice to test or develop your most highly rated feature. When you later come to rate the list again, the item you rated as next best has suddenly become of low importance, even though it has the same value as the previously highly rated item. Making a choice after rating affects your valuation of an item of identical value, and you start to score it lower to relieve your uneasy feeling, your cognitive dissonance. Project managers and team leads need to be aware of this, since something of high importance could be downgraded by a team because of cognitive dissonance.
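The rating effect described above (psychologists sometimes call it "spreading of alternatives") can be sketched as a toy simulation. Everything here is an illustrative assumption of mine, not data from any study: the feature names, the scores and the size of the post-choice drift are all made up.

```python
# Toy simulation of post-choice re-rating: after choosing between equally
# rated features, the rejected rivals tend to be scored lower next time,
# which relieves the dissonance of having picked one over the other.
# The drift of 1 point is purely illustrative.

def rerate_after_choice(ratings, chosen, drift=1):
    """Return new ratings after a choice: features that rivalled the chosen
    one (equal or higher score) are nudged down; everything else is kept."""
    new = dict(ratings)
    for feature, score in ratings.items():
        if feature != chosen and score >= ratings[chosen]:
            new[feature] = score - drift  # downgrade the rival to justify the choice
    return new

ratings = {"feature_a": 5, "feature_b": 5, "feature_c": 2}
after = rerate_after_choice(ratings, chosen="feature_a")
# feature_b, once rated equal to feature_a, now scores lower than it
```

A team lead comparing the two rounds of ratings would see feature_b slip in priority even though nothing about the feature itself changed.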
So how else can this affect testing?
I believe it can actually harm what we are testing. If our belief about what the software should be doing contradicts what it is actually doing, and we resolve the conflict by adjusting our belief to justify what the software is doing, there is a danger we could miss an important bug.
Further reading:
http://tip.psychology.org/festinge.html
http://www.colorado.edu/communication/meta-discourses/Theory/dissonance/
http://arstechnica.com/science/news/2010/12/this-is-your-brain-undergoing-cognitive-dissonance.ars
___________________________________________________________________________________
This is going to be my last post before the holiday season so I would like to say to all readers old and new have a happy holiday and look forward to writing again in the New Year.
Tuesday, 14 December 2010
Thursday, 9 December 2010
Sorting the chaff from the wheat
One of the questions posed at Eurostar 2010 hot topics panel session (link) was:
What’s the most important skill for a tester?
Michael Bolton (Link) gave a witty reply of
Recognizing there is no most important skill.
Whilst I agree with Michael that testers need a wide variety of skills, it got me thinking about skills that all testers need or should have. One of these is being able to deal with the vast amount of data that everyone now has to deal with. How do we deal with it?
I have recently been re-reading the classic H.G. Wells story “The Time Machine” (Link) and started to think about the two communities within the story, the Eloi and the Morlocks, and how society in general is receiving so much information that it is starting to make us dumb, which IMO is a dangerous thing for testers.
Are we as a society becoming like the Eloi?
They have access to amazing technology that meets all their needs; however, in the book they come across as dumb and lacking in curiosity. They see no need to be thinkers or philosophers. It appears technology is making things easier and easier for us.
When I was young (in the very old days) to find anything out I used to have to read a book. Since books were an expensive item I used to have a list of books I wanted for my birthday or Christmas and I used to visit the local library every week to sit and read and learn new fantastic things. I would get lost in a world of fantasy and knowledge; even then I had a thirst for learning which fortunately has never left me. It is such a shame that local libraries all over the world are shutting due to technology. (Do a Google search for News and Libraries and Closing)
Now information is available at the click of a button. We can find information on how an airplane works, or on the theory of relativity, in an instant. Technology has made all this information easier to get; however, IMO it has made us think less.
Do we still question all this information?
As testers we know we should be questioning everything; we learn to sort the chaff from the wheat, as the title of this blog post implies (Link). With such a wealth of information that is so easy to gather, the skill is being able to collate it and remove the distractions. How easy do we as testers find this? I found it a natural thing I appeared to do without thinking, until I started to write this blog article. My concern is that, with so much information, do I end up throwing away something that later proves to be vital or important?
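One way to picture this triage is as a simple scoring filter that keeps the apparently irrelevant items in a backlog rather than discarding them, in case one later proves important. The topic list, article titles and threshold below are purely hypothetical examples, not a recommendation of any real tool:

```python
# A minimal sketch of "sorting the chaff from the wheat": score incoming
# article titles against the topics currently of interest, and park the
# rest in a backlog instead of throwing them away.

def triage(articles, topics, threshold=1):
    """Split titles into (keep, backlog) by counting topic-word matches."""
    keep, backlog = [], []
    for title in articles:
        words = title.lower().split()
        score = sum(1 for topic in topics if topic in words)
        (keep if score >= threshold else backlog).append(title)
    return keep, backlog

articles = [
    "Exploratory testing heuristics",
    "Celebrity gossip roundup",
    "Session based testing in practice",
]
keep, backlog = triage(articles, topics={"testing", "exploratory", "heuristics"})
```

The point of the backlog is exactly the worry above: a crude filter will misfile things, so nothing is deleted, only deferred.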
Does anyone out there in the testing community have a method they use to help with this?
How do we not forget everything?
This then leads on to the topic of self-learning: how do you select what to read and what not to read? How do you ensure you do not miss a really important article that has been blogged?
One approach I use for self-learning is Twitter and the Software Testing Club (Link); within these communities people talk about blogs they have read or recommend reading, the power of the crowd. This helps to reduce the amount of information I have to process. Another approach is to actually talk to people; humans are wired to absorb information better via speech than from the written word, so it is more likely to be remembered.
My other concern with all this information is our ability to remain focused, another important testing skill. With so much information to digest, it is easy to get into the habit of just scanning the information and not reading the whole article (I wonder how many people will get to this part of the article?). It is so easy to start an article, get distracted by some other piece of information, and never return to the original. I sometimes think I should not add any hyperlinks within my articles and instead put them all at the end, but I want to credit the people who inspire me or provide me with information as I write; it is one of those things which is important to me. A perfect example of this was the recent article Michael Bolton wrote about estimation (Link), which was a five-part series; how many people read the whole of it? It is so easy to skip or scan and miss an important point within an article, and the same can be applied to testing. If we scan and miss something, it could be that the thing we missed will cost us a lot of money.
My other concern is that we are becoming a society of 24x7 learners, we never switch off.
Are you one of these people?
My concern about this came from a conversation in which I stated that I do my hobby as a job, and this scared me. I have a passion for testing and learning, but am I not in danger of burning myself out, or of forgetting valuable knowledge, unless I switch off?
How many others reading this blog switch off and pursue other interests outside of technology?
If you take nothing else from this article, please do switch off. I have hobbies that have nothing to do with computers. I enjoy being creative and taking photographs, spending time outside at stupid o’clock catching sunsets, sunrises and landscapes. I enjoy growing things and spend time in my garden; I am fortunate to have a very large garden in which to grow and nurture things. I also have a family, and I am a grandparent; spending time with my granddaughter is such a wonderful thing to do. We call her our little time waster, since time can go so quickly when you are engaged in playing. After all these hobbies it is surprising that I have time to do my job or to keep learning, but I come back to my work more energized and ready to learn more.
Since starting to write this article I have found a couple of other blogs that mention the problems of attention span and remaining focused; they can be found here:
http://cultureandcommunication.org/f09/tdm/sara-hardwick/attention-span-in-the-internet-age-information-overload-memory-and-teal-deers/
http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/6868/
I do recommend reading them.
PS I will leave it to your imagination who I think the Morlocks are :o)
Friday, 3 December 2010
The Human Element
I attended the Eurostar Testing Conference (http://www.eurostarconferences.com/conferences/2010/) in Copenhagen, Denmark this year and met a large group of very interesting people. A few highlights for me were:
Meeting the Cartoon tester (http://cartoontester.blogspot.com/) in person, a friendly unassuming guy with a quick sense of humour.
The other highlight was the amount of ‘real life’ examples of exploratory testing and session-based test management. One of the best things I took away was from Carsten Feilberg’s talk on Session-Based Testing in Practice (http://carstenfeilberg.blogspot.com/), in which he reframed the wording of SBTM as Managing Testing Based upon Sessions. It was a “why did we not think of that before?” moment!
One of the keynotes was by Stewart Reid on “When Passion Obscures The Facts: The Case for Evidence-Based Testing”, in which he looked at what testing could learn from Evidence-Based Medicine (http://en.wikipedia.org/wiki/Evidence-based_medicine). During the presentation I thought I could see flaws in the argument he was trying to put together, but could not quite work out what they were. One thing I have found out since, and one point that Stewart did appear to miss, was the work of the GRADE Working Group, which is a newer system (and appears to be gaining ground). The principles here are based upon extrapolations (http://en.wikipedia.org/wiki/Extrapolation).
To quote from Wikipedia:
Extrapolations are where data is used in a situation which has potentially clinically important differences than the original study situation. Thus, the quality of evidence to support a clinical decision is a combination of the quality of research data and the clinical 'directness' of the data.
Interestingly, the data gathered for extrapolation are based more upon human experience than just a set of numbers. Is it just me, or is this like running a set of known tests and then exploring afterwards? See my previous post on Hybrid testing (http://steveo1967.blogspot.com/2010/09/hybrid-testing.html).
So why have I called the title of this blog post “The Human Element”?
I was having a conversation with my wife (Tracy) after the conference; she is a retired theatre nurse and understands the medical arena very well, and she came up with a wonderful phrase: it is all well and good having all these numbers and statistics, but you cannot ignore the human element. She gave an example in which a nurse working in Intensive Care has a lot of machinery (with installed software) at her disposal, yet none of this equipment can tell her if the patient is feeling happy, sad or uncomfortable.
Tracy said the problem is that no machine has a soul; machines do not care how the patient is feeling. The machine could be saying everything is OK, but the nurse, with her compassion, knows and understands how the patient really is. I asked my wife to have a talk with Stewart and some other testers, including Lynn Mckee (http://www.qualityperspectives.ca/).
This provided a wonderful insight to me: we as testers forget that there are lots of people we should be using as oracles. When we test a system, we should not forget the human element.
During their conversation Stewart started to mention the use of statistics as evidence for making healthcare decisions (Cochrane Library - http://www.thecochranelibrary.com/view/0/index.html?s_cid=citation), and Tracy said that getting to the point of making a decision still requires the GP to ask questions and explore all possibilities. At the end of the day it is just statistics, said Tracy, and it does not help in a situation in which a perfectly healthy 20-year-old is prescribed a drug for a problem and then dies due to an undetected heart problem. No amount of journals or evidence can account for this, since it plays out at a personal level between the patient and the medical expert.
The final conversation I remember Tracy having was with Lynn and a few others, and it is very useful for testers when they come up against the problem of ‘It should do this.’
Tracy talked about Dr Spock and his book about the development of children (http://en.wikipedia.org/wiki/Benjamin_Spock), which says that at certain ages children should be doing this and that. This book causes major worries in parents when their child does not meet the timescales within the book for talking, sitting up, walking and so on. Tracy then made a point which caused a great amount of laughter: “People seem to forget that babies have not read the book – they will develop at their own pace.”
I found this a wonderful piece of insight. We seem to forget that everyone is different, and if we apply this to software and its development we start to realise that every piece of software is different, and that we need to explore the software and play with it to get its full potential.
To conclude this post I would like to say a big thank you to my wife Tracy for her encouragement and support in what I do and for giving us testers a lesson in remembering about the human element in what we do.
Tuesday, 9 November 2010
STC: A tester is for life, not just for Christmas
I have completed my form have you?
The Software Testing Club will be creating an ebook to help raise money for Oxfam this Christmas.
It needs your input as a tester – so if you have not yet completed the form go to the website and do so.
You can help by filling in this form, promoting the book and/or donating.
Then you can make a donation to OXFAM and promote this worthwhile cause by any communication means you have at your disposal.
For example
twitter,
your blog
email,
telephone,
over drinks at the bar,
Whilst whispering sweet nothings to your sweetheart.
(oops strike the last one – that might get you in trouble)
But I hope you get the message: the more people that know about this, the better.
Tuesday, 2 November 2010
Exploring the World
I have noticed that I have been a little remiss with my blog recently; this has been due to a combination of things such as workload, home life and not having a great deal to say, which is fairly unusual for me. I don’t want to blog for the sake of blogging; I want to blog when I feel I have something to say about the testing world.
I will soon be on my travels again to talk about exploratory testing and testing skills this time in Israel as part of an internal company workshop. I find it interesting that again I will adjust my material to match my audience on a cultural level, see my previous blog about training in India (http://steveo1967.blogspot.com/2010/06/training-in-india.html).
I wonder how many of us do this, and how many of us just keep the same material and recycle it regardless of the audience?
This brings me to the point of this blog: what happens if we treat software as a different culture, and we try to explore and communicate with that culture in exactly the same way each time, without making adjustments for the cultural differences?
Are we going to get to know anything about this culture?
What will we learn?
Will this culture give us any useful information back?
If we compare this to the approach I use when presenting, you can see that I learn about the culture. I explore by communicating with it and finding out about all the subtle differences there are. I try to avoid the traps and faux pas that can cause offence through ignorance of the culture. I consult oracles that have knowledge of the culture, and I use heuristics when presenting to test the reaction of the audience.
Does it respond well to what I am saying?
Is it losing interest?
I then adapt my presentation on the fly to try to re-engage with the audience.
I am using the exploratory approach when presenting, and I do this naturally. I very much doubt that many people who read this blog would keep the same material, communication methods and approach when working with different cultures.
So why do we as a testing profession still insist that we can test software with scripts that do not change or adapt to the slight differences in culture? Yes, there is an argument that some things do not change regardless of the culture, and that is true. However, if you go to a different culture and you are ignorant of their values and beliefs, and you are unwilling to learn, then you will leave that culture none the wiser, and no richer in experience and understanding.
Do not be ignorant when exploring software. Yes, you can use the same techniques and methods you have gathered over the years to explore the software, but do not fall into the trap of following things by rote.
Hopefully there will be a few events coming up soon in which I can get some more topics to blog about.
If anyone is interested I will be at Eurostar (http://www.eurostarconferences.com/) this year in Copenhagen Denmark and it would be nice to meet up with like minded people and hopefully have some great discussions over beer of course.
I will soon be on my travels again to talk about exploratory testing and testing skills this time in Israel as part of an internal company workshop. I find it interesting that again I will adjust my material to match my audience on a cultural level, see my previous blog about training in India (http://steveo1967.blogspot.com/2010/06/training-in-india.html).
I wonder how many of us do this and how many of us just keep the same material and just recycle it regards of the audience?
This brings me to the point of this blog, if we treat software as different cultures and we try to explore and communicate with these cultures in the exact same way each time without making adjustments for the cultural differences.
Are we going to get to know anything about this culture?
What will we learn?
Will this culture give us any useful information back?
If we compare this the approach I use when presenting you can see that I learn about the culture. I am exploring by communicating with it and finding out about all the subtle differences there are. I try to avoid the traps and fopars that can cause offence by being ignorant of the culture. I consult oracles that have knowledge of the culture; I use heuristics when presenting to test the reaction of the audience.
Does it respond well to what I am saying?
Is it losing interest?
I then adapt my presentation on the fly to try to re-engage with the audience.
I am using the exploratory approach when presenting, and I do this naturally. I very much doubt that many people who read this blog keep the same material, communication methods and approach when working with different cultures.
So why do we as a testing profession still insist that we can test software with scripts that do not change or adapt to the slight differences in culture? Yes, there is an argument that some things do not change regardless of the culture, and that is true. However, if you go to a different culture, are ignorant of its values and beliefs, and are unwilling to learn, then you will leave that culture none the wiser, and no richer in experience and understanding.
Do not be ignorant about exploring software. Yes, you can use the same techniques and methods you have gathered over the years to explore the software, but do not fall into the trap of following things by rote.
Hopefully there will be a few events coming up soon in which I can get some more topics to blog about.
If anyone is interested I will be at Eurostar (http://www.eurostarconferences.com/) this year in Copenhagen Denmark and it would be nice to meet up with like minded people and hopefully have some great discussions over beer of course.
Monday, 27 September 2010
Hybrid Testing
There has been a lot of talk within the testing community about the scripted versus non-scripted approach to testing. I have read and heard from people aligned to each school of thought trying to debunk the other school's approach. This can be very confusing to those who work in the profession of software testing, with people on either side presenting their point of view. I thought I would blog about my experiences of using both approaches in my day-to-day testing job.
When I first started in testing I worked for many companies which had adopted the PRINCE2 methodology of software development and loosely followed a V-model process. This meant that requirements and specifications were gathered before any development work started. Using these documents, as a tester I would do some gap analysis from a testing perspective to see where requirements contradicted each other and where design specifications did not meet requirements. These were very heavyweight documents, and working through them was a laborious task that engaged me as a tester only to a certain point. From these documents I would start to create scripted tests and build up a repository of test suites. Once the software started to be developed and I could gain access to certain features, my engagement as a tester increased. I would run through my scripted tests and find that a large number of them needed altering, since I had made wrong assumptions or the requirements and specifications did not match what was delivered. As I 'explored' the software I found more and more test ideas, which would become test cases. The discussions I had with senior management on why the number of test cases kept increasing are another story altogether. I would spend a large amount of time adding detailed steps to the test scripts and then, when we had another drop of the software, run them again as a regression pack. I tried to automate the tests, which worked for some easy parts and not for others. What I did not know at the time was that I was carrying out exploratory testing without realising it. Once I had the software it was the most engaging time as a tester; it was what made me feel I had done a good job by the end of the day.
So let us jump forward to today: we have TDD, agile and a multitude of different approaches to software development. It is all about being flexible, developing the software the customer needs quickly and efficiently, and being able to adapt quickly when customer needs change. As testers we get to see and explore the software a lot sooner.
A lot has changed from a tester's perspective: we are now engaged more in the whole process, and we are expected to have some knowledge of coding (IMO not always necessary, but a good tool to have). We get to see the software a lot sooner and are able to exercise and explore it, engaging our testing minds with what the software should, could or may do. However, have things changed that radically?
What has made me think about writing this blog has been the debates that have been going on about scripted vs. non-scripted testing. I am currently working on a new project in which there are many dependencies on internal components and external 3rd parties, all of which are working to different timescales. Some of the components can be simulated while others cannot, due to time constraints and other technical problems. We have some pretty good requirement documents and some design specifications. What we do not have at the moment is fully working end-to-end software. So I am back creating scripted test cases to meet the requirements, finding discrepancies in the documents and asking questions. The difference is that now I do not fully step out my scripts: I create pointers on how to test the feature, I note test ideas that could be interesting to look at when the software arrives, and I make a note of any dependencies that the software would require before testing that feature. So I create a story about testing the feature rather than a step-by-step set of instructions. It is more a testing checklist than a test script. With this I am combining both the scripted and the non-scripted approach. I am sure a lot of readers will read this and think that they are doing the same.
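A testing checklist of the kind described above can be captured in something as simple as a small data structure. This is only an illustrative sketch; the field names and the example content are my own invention, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCharter:
    """A lightweight alternative to a step-by-step script:
    a mission plus pointers, test ideas and dependencies."""
    mission: str
    test_ideas: list = field(default_factory=list)    # things worth exploring
    dependencies: list = field(default_factory=list)  # needed before testing starts

# Hypothetical example entries for one feature.
charter = TestCharter(
    mission="Explore the order-export feature against its requirement",
    test_ideas=["boundary dates", "empty export", "3rd-party service down"],
    dependencies=["simulator for the payment component", "agreed test data set"],
)
print(charter.mission)
```

The point of keeping it this loose is that the ideas guide the session without dictating every step, so the tester stays engaged rather than just following instructions.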
The people who talk about exploratory testing have never, to my recollection, said that there is no need for scripted tests. Some of the requirements I have are fixed: they will not change, and the results should be as stated, so those requirements I can script, or automate if possible. It does not mean you are doing anything wrong, nor does it mean that you are not following an exploratory approach. Exploratory testing is just that: an approach. It is not a method; it is not a 'do this and nothing else'. It is a tool to enable you to test and hopefully engage you in testing rather than just being a checking robot. If you still create detailed step-by-step scripts then there is nothing wrong in doing that; I still do when required.
Exploratory testing can be used without the software: you can look at available documents and explore them for test ideas and new creative ways to test what the documents are stating; you can question, you can analyse, you can make suggestions and improvements; you can use your brain.
Wednesday, 8 September 2010
We test the system and gather
I have noticed that it has been a while since I wrote a blog, due to family commitments and vacation time. I had an idea to blog about the importance of gathering evidence when testing, especially when using an exploratory testing approach. I decided to take an example of why it is important from an internal workshop that I run on exploratory testing.
So are you sitting comfortably?
Here is the story of Timmy the Tester and Elisa the Explorer
Timmy Tester
Timmy Tester has been given a new build to test and decides he is going to test it using the exploratory approach. Timmy writes that his mission statement is to test that function x has been implemented.
He installs the release and starts entering different values and pressing buttons around the new feature at random. He does this for about half an hour, and then all of a sudden the application crashes.
Timmy takes a screen shot of the crash and a system dump of the crash and goes to talk to the developer. The first question the developer asks is
“Have you tried to reproduce the problem?”
At this point Timmy says no and goes back to try to reproduce the problem.
Two days later Timmy has been unable to reproduce the problem and now thinks it could have been one of those strange things.
Three months later the application is live on the customer site. Within 30 minutes there are a large number of calls to support stating that the application is crashing. The problem gets passed back to Timmy, who notices that it appears to be the same one they saw when carrying out 'exploratory' testing…
Elisa the Explorer
Elisa has been given a new build to test and decides she is going to test it using the exploratory approach. Elisa creates a mission statement stating that she is going to be testing the new function.
Elisa installs the new application and starts to enter different values around the new feature. As she does this, Elisa has another computer on which she makes notes and takes screenshots at relevant points to aid clarity of each step she has carried out. At certain points Elisa finds behaviour of the system which does not seem correct, so she starts another mission statement to look into that behaviour. Elisa then examines the strange behaviour in more detail, making notes of the steps she is carrying out as she goes. All of a sudden, when pressing a button, the application crashes.
Elisa makes some notes, takes a screen shot and a system dump of the crash.
Elisa then resets the application back to a clean system and repeats the last set of steps which she had made a note of. The crash happens again.
Elisa then goes to see the developer and states that she has managed to reproduce the problem more than once, and here are the steps.
Elisa sits with the developer while they go through the steps together and the developer sees the crash.
Two days later Elisa has been given a fix for the crash. She now has an automated test for the crash and runs it straight away. The test passes and Elisa continues with the rest of her testing.
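Once a crash can be reproduced from recorded steps, pinning it with an automated check like Elisa's is straightforward. A minimal sketch, with a hypothetical `submit_order` function standing in for whatever real feature crashed; the real check would drive the actual application:

```python
# Hypothetical stand-in for the fixed feature; in Elisa's case this
# would call the real application with the steps from her notes.
def submit_order(quantity):
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return {"status": "accepted", "quantity": quantity}

def test_crash_is_fixed():
    """Replays the recorded steps that previously crashed the application."""
    result = submit_order(0)  # the boundary value noted during the session
    assert result["status"] == "accepted"

test_crash_is_fixed()
print("regression check passed")
```

Because the check encodes the exact reproduction steps, it can run against every future build as part of the regression pack, which is precisely what lets Elisa verify the fix and move on.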
It may seem like common sense, but I have seen more cases of Timmy than Elisa from people who have said they are using exploratory testing. It is extremely important to record everything, and to remember that exploratory testing does not remove any of the principles of testing:
“All tests are repeatable”
“All problems are reproducible.”
There are many ways we can gather evidence of our testing sessions, and there is a large number of tools available to the exploratory tester. In much the same way that the first humans to explore the North Pole took tools with them to help and support their efforts, exploratory testers can do the same when exploring the system under test.
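Evidence gathering does not need a heavyweight tool; even timestamped notes like Elisa's, kept per mission, go a long way. A minimal sketch of such a session logger (the class and its method names are illustrative, not any particular tool's API):

```python
import datetime

class SessionLog:
    """Timestamped notes for one exploratory testing session (one mission)."""

    def __init__(self, mission):
        self.mission = mission
        self.entries = []

    def note(self, text):
        """Record an observation with the time it was made."""
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        self.entries.append((stamp, text))

# Hypothetical session: each observation is captured as it happens,
# so the exact steps can be replayed later.
log = SessionLog("Explore the new feature")
log.note("entered boundary value 0 in the quantity field")
log.note("application crashed on Save; screenshot and system dump taken")
print(len(log.entries), "entries recorded")
```

The timestamps matter: they let you reconstruct the order of steps afterwards, which is what makes a crash reproducible rather than "one of those strange things".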
Maybe I should look at some of these tools and write a blog about them – or even better people who read this blog might be able to suggest some good tools that they have had experience of using.