There has been a lot of talk within the testing community about scripted versus non-scripted approaches to testing. I have read and heard people aligned to each school of thought trying to debunk the other school's approach, which can be very confusing for those who work in the profession of software testing. I thought I would blog about my experiences of using both approaches in my day-to-day testing job.
When I first started in testing I worked for several companies which had adopted the PRINCE2 methodology and loosely followed a V-model process. This meant that requirements and specifications were gathered before any development work started. Using these documents, as a tester I would do some gap analysis from a testing perspective to see where requirements contradicted each other and where design specifications did not meet requirements. These were very heavyweight documents, and it was a laborious task that engaged me as a tester only to a certain point. From these documents I would start to create scripted tests and build up a repository of test suites. Once the software started to be developed and I could gain access to certain features, my engagement as a tester increased. I would run through my scripted tests and find that a large number of them needed altering, since I had made the wrong assumptions or the requirements and specification did not match what was delivered. As I ‘explored’ the software I found more and more test ideas, which would become test cases. (The number of discussions I had with senior management about why the test case count kept increasing is another story altogether.) I would spend a large amount of time adding detailed steps to the test scripts, and then, when we had another drop of the software, run them again as a regression pack. I tried to automate the tests, which worked for some easy parts and not for others. What I did not know at the time was that I was carrying out exploratory testing without realising it. Once I had the software it was the most engaging time as a tester; it was what made me feel I had done a good job by the end of the day.
So let us jump forward to today: we have TDD, agile and a multitude of different approaches to software development. It is all about being flexible, developing the software the customer needs quickly and efficiently, and being able to adapt quickly when customer needs change. As testers we get to see and explore the software a lot sooner.
A lot has changed from a tester's perspective: we are now engaged more in the whole process, and we are expected to have some knowledge of coding (IMO not always necessary, but a good tool to have). We get to see the software a lot sooner and are able to exercise and explore it, and to engage our testing minds with what the software should, could or may do. However, have things changed that radically?
What has made me think about writing this blog has been the debates that have been going on about scripted vs. non-scripted testing. I am currently working on a new project in which there are many dependencies on internal components and external third parties, all of which are working to different timescales. Some of the components can be simulated, while others cannot due to time constraints and other technical problems. We have some pretty good requirement documents and some design specifications. What we do not have at the moment is fully working end-to-end software. So I am back creating scripted test cases to meet the requirements, finding discrepancies in the documents and asking questions. The difference is that now I do not fully step out my scripts: I create pointers on how to test the feature, I note test ideas that could be interesting to look at when the software arrives, and I make a note of any dependencies that would need to be in place before testing that feature. I create a story about testing the feature rather than a step-by-step set of instructions; it is more a testing checklist than a test script. So with this I am combining both the scripted and the non-scripted approach. I am sure a lot of readers will read this and think that they are doing the same.
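To make the idea concrete, here is a rough sketch (in Python, with entirely made-up feature names and entries) of the kind of lightweight note I mean: pointers, test ideas and dependencies rather than numbered steps.

```python
# A minimal, hypothetical sketch of a per-feature testing note. The feature
# name and every entry below are invented for illustration only.
feature_note = {
    "feature": "login",  # hypothetical feature under test
    "pointers": [
        "check behaviour with expired credentials",
        "try unusual characters in the user name",
    ],
    "test_ideas": [
        "what happens on the third failed attempt?",
    ],
    "dependencies": [
        "authentication service must be reachable",
    ],
}

# At test time the note reads as a checklist rather than a script.
for section in ("pointers", "test_ideas", "dependencies"):
    print(section + ":")
    for item in feature_note[section]:
        print("  - " + item)
```

The point is not the format (a wiki page or a text file works just as well) but that the note captures intent and risks, leaving the detailed steps to be decided when the software arrives.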
The people who talk about exploratory testing have never, to my recollection, said that there is no need for scripted tests. Some of the requirements I have are fixed: they will not change and the results should be as stated, so those requirements I can script, or automate if possible. It does not mean you are doing anything wrong, nor does it mean that you are not following an exploratory approach. Exploratory testing is just that: an approach. It is not a method, not a 'do this and nothing else'. It is a tool to enable you to test and, hopefully, to engage you in testing rather than just being a checking robot. If you still create detailed step-by-step scripts then there is nothing wrong with that; I still do when required.
Exploratory testing can be used without the software. You can look at the available documents and explore them for test ideas and new, creative ways to test what the documents are stating; you can question, you can analyse, you can make suggestions and improvements. You can use your brain.
Monday, 27 September 2010
Wednesday, 8 September 2010
We test the system and gather
I have noticed that it has been a while since I blogged, due to family commitments and vacation time. I had an idea to blog about the importance of gathering evidence when testing, especially when using an exploratory testing approach. I decided to take the example of why it is important from an internal workshop that I run on exploratory testing.
So are you sitting comfortably?
Here is the story of Timmy the Tester and Elisa the Explorer
Timmy Tester
Timmy Tester has been given a new build to test and decides he is going to test it using the exploratory approach. Timmy writes that his mission statement is to test that function x has been implemented.
He installs the release and starts to enter different values and press buttons around the new feature at random. He does this for about half an hour, and then all of a sudden the application crashes.
Timmy takes a screenshot of the crash and a system dump and goes to talk to the developer. The first question the developer asks is:
“Have you tried to reproduce the problem?”
At this point Timmy says no and goes back to try to reproduce the problem.
Two days later Timmy has been unable to reproduce the problem and now thinks it could have been one of those strange things.
Three months later the application is live on the customer site. Within 30 minutes there are a large number of calls to support stating that the application is crashing. The problem gets passed back to Timmy, who notices that it appears to be the same as the one they saw when carrying out ‘exploratory’ testing…
Elisa the Explorer
Elisa has been given a new build to test and decides she is going to test it using the exploratory approach. Elisa creates a mission statement stating that she is going to be testing the new function.
Elisa installs the new application and starts to enter different values around the new feature. As she is doing this, Elisa has a second computer on which she makes notes and takes screenshots at relevant points, to make each step she has carried out clear. At certain points Elisa finds behaviour of the system which does not seem correct, so she starts another mission statement to look into that behaviour. Elisa then examines the strange behaviour in more detail, making notes of the steps she is carrying out as she goes. All of a sudden, when she presses a button, the application crashes.
Elisa makes some notes and takes a screenshot and a system dump of the crash.
Elisa then resets the application back to a clean system and repeats the last set of steps which she had made a note of. The crash happens again.
Elisa then goes to see the developer and states that she has managed to reproduce the problem more than once, and here are the steps.
Elisa sits with the developer while they go through the steps together and the developer sees the crash.
Two days later Elisa has been given a fix for the crash. She now has an automated test for the crash and runs it straight away. The test passes and Elisa continues with the rest of her testing.
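As an illustration, Elisa's automated check might look something like the sketch below. The application class and its methods here are hypothetical stand-ins (the post does not describe the real interface); the point is simply that the recorded reproduction steps become the body of an automated regression test that can be re-run against every fix and future build.

```python
# A hypothetical sketch: turning recorded reproduction steps into an
# automated regression test. The Application class is a made-up stand-in
# for whatever interface the real system under test exposes.

class Application:
    """Toy model of the application under test."""
    def __init__(self):
        self.values = []

    def enter_value(self, value):
        self.values.append(value)

    def press_button(self):
        # The (now fixed) defect: pressing the button used to crash
        # when a negative value had been entered.
        return sum(self.values)

def test_crash_regression():
    """Replays the exact steps recorded just before the crash."""
    app = Application()    # start from a clean system, as Elisa did
    app.enter_value(10)
    app.enter_value(-1)    # the input that previously triggered the crash
    assert app.press_button() == 9   # no exception means the fix holds

test_crash_regression()
print("regression test passed")
```

Because the steps were written down at the time, the test is a direct transcription of the evidence rather than a guess made days later.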
It may seem like common sense, but I have seen more cases of Timmy than Elisa among people who have said they are using exploratory testing. It is extremely important to record everything, and to remember that exploratory testing does not remove any of the principles of testing:
“All tests are repeatable”
“All problems are reproducible.”
There are many ways we can gather evidence of our testing sessions, and there are a large number of tools available to the exploratory tester. In much the same way that the first humans to explore the North Pole took tools with them to help and support their efforts, exploratory testers can do the same when exploring the system under test.
Maybe I should look at some of these tools and write a blog about them. Or, even better, people who read this blog might be able to suggest some good tools that they have had experience of using.
Monday, 2 August 2010
The Certification Filter
I had a few moments to myself the other day and decided to have a bit of fun and do some research into what agencies require when you apply for testing roles. I was surprised (or not) by the number of roles that stated “will be ISEB certificated”.
How can they get it so wrong? The ISEB is now defunct; it should be ISTQB (sic), so all those who hold an ISTQB qualification need not apply…
Anyway back to the point I am trying to make.
I did some quick research and found that, of the 122 testing roles listed within the last seven days, I would be excluded from 28 of them since they stated “…will be ISEB certified”. If you add in the ones that say “would prefer candidates with ISEB certification”, you get to over half of the roles advertised that I could not apply for.
@Mheusser made the following tweet: “@steveo1967 - certification might just get you the job you don't want to have.”
That may be true, but it got me thinking: maybe it is not the company using the agency that is mandating this, but the agency adding its own filter. The company could be missing out on some great testers because of the agency's filtering system.
The following are some imaginary conversations with an agency. (Sadly, I am sure some people have already experienced this…)
************************************************************************************
Me: Hi, I am interested in being put forward for the role of chief tester, as per the advert you posted this morning on your website.
Agency: Sure, have you already sent us your CV/resume?
Me: Yes.
Agency: OK, what is your name? Let us see if we can find your CV.
Me: My name is Timmy Tester.
Agency: Wow, cool name, sure matches your profession.
Me: Yes (sigh), I get that all the time…
Agency: OK, got your details – just looking at them now. Wow, 20 years in software testing, that is impressive; I see you have worked at some very well known companies. Oh, wait a minute… you do not list that you hold the ISEB qualification.
Me: No, I do not.
Agency: I am sorry, Timmy, I cannot put your application forward – we only accept people who are ISEB qualified.
Me: Why? I have more than 20 years in software testing.
Agency: Yes, I can see that; however, the ISEB qualification means that you know how to test and is mandatory for any role you apply for with us. Sorry, but I cannot put you forward for this. I suggest you go and sit the exam and get back to us. Goodbye.
************************************************************************************
Me: Hi, I am interested in being put forward for the role of chief tester, as per the advert you posted this morning on your website.
Agency: Sure, have you already sent us your CV/resume?
Me: Yes.
Agency: OK, what is your name? Let us see if we can find your CV.
Me: My name is Ivor Certificate.
Agency: OK, got your details – just looking at them now. You have been in testing for 8 years, and I see you also hold the ISEB qualification. That means you must know a lot about testing.
Me: No, I do not.
Agency: Sorry, you said you do not?
Me: Yes, I did say that: no, I do not know anything about testing.
Agency: Then how come you have an ISEB certification?
Me: Because I noticed that if I had it I could apply for any testing job.
Agency: So you must know something about testing?
Me: Well, it is an interesting story. I paid to do the multiple-choice exam, sat down and completed it in 10 minutes by just randomly answering the questions. No one checks whether I have any competence at testing; by luck I managed to get the pass mark required for the certificate. Hence I am now classed as someone who must know about testing and how to test.
Agency: But it says on your CV that you have been testing for the last 8 years.
Me: Yes, I have, but I just go in and let others, who are not certified, do all the work while I copy what they have done and claim credit for it. So, are you going to put me forward for the role?
Agency: Yes, I will; you meet the selection criteria, so I cannot see any problem in putting you forward.
Me: Thank you.
************************************************************************************
This may seem like a silly situation, but I am sure it could happen in reality. I am not against qualifications and people trying to improve themselves, but when those qualifications are used as a filter to exclude people from applying for jobs, it makes me see red.
There are better ways of proving your ability as a tester. An agency could talk to the previous companies you have worked for, or interview you about testing, your thinking and your problem-solving ability. However, this would take too much time.
I have not yet found any roles that stipulate attendance on the Rapid Software Testing course or the Black Box Software Testing course run through the Association for Software Testing. Why is this so? Is it that these types of courses do not have huge budgets to promote themselves? Or that they try to be non-profit?
What is the solution to all of this?
I think that within the testing community we need to start educating agencies and companies about how to sort the good testers from the bad; how we go about this I am not sure.
Should we have a dedicated website that we can direct agencies to, to explain about certification? I feel this could be a good start. It would need someone with far better web skills than mine to get it running, and it would need to be as unbiased as possible.
We could then mail-shot the CEOs of each agency, whenever we hit this problem, directing them to the website.
Do we try to do presentations at employment agency conferences?
I feel there is a need to educate agencies and companies that are looking to employ testers, and to give advice on how to spot the good candidates from the bad; but they need to get rid of the certification filter.
Another thought I had: would it be a good idea to have a vetting service for testers and agencies? It could be a one-stop service for agencies to verify testers and their abilities, and to obtain a list of people who would vouch for them.
I tried to think over the weekend about whether this could work. I have a lot of concerns about it being misused and ‘gamed’ by people whose moral compass is slightly off balance. How would it be funded? Would it become a monster of its own making? How would testers be vetted and vouched for? Would it be based upon their online presence? For example, I am sure I could ask a few people online to vouch that I am a good tester, yet none of those people have worked with me and seen me carry out testing. I could just be saying the right thing at the right time to impress people – how would anyone know unless they had worked with me?
This really brings us back to the beginning of the article: agencies and companies need some way to vet testers and get some guarantee that they know about testing (regardless of which school of thought they follow). Using certification is an easy way to sort candidates quickly, no matter how flawed the certification may be. Do I just bite the bullet and sit the exam if I do not want to be excluded from testing jobs? Does anyone have any other methods that agencies or companies could use when they are looking for skilled testers?
Wednesday, 28 July 2010
Response to "How many test cases" by James Christie
James Christie wrote a great blog about his concerns over using test cases to measure testing:
http://clarotesting.wordpress.com/2010/07/21/but-how-many-test-cases/
Several people blogged responses:
Simon Morley added his view here: http://testers-headache.blogspot.com/2010/07/test-case-counting-reflections.html
Jeroen Rosink added his here: http://testconsultant.blogspot.com/2010/07/repsonse-on-how-many-test-cases-by.html
and Abe Heward directed people to a similar blog he had written earlier: http://www.abeheward.com/?p=1
Each of these posts makes very valid points about how unhelpful test case counts are as an indicator of the progress or coverage of testing effort.
The aim of this post is to expand upon them and see whether there are ways in which we could measure testing effort and progress without resorting to raw numbers.
To start with, let us take a look at a made-up situation.
You are testing on a project which has a two-week testing cycle, and your manager has requested that each day you report the number of test cases planned, the number run, and the percentage complete. (Does this seem familiar to anyone?)
So before you start testing you report to your manager that you have 100 test cases to run over the two-week cycle.
At the end of day one you report that 60 of the 100 test cases have been run. Management think: cool, we are ahead with testing, 60% done in one day.
At the end of day two you report that you have run only two more test cases, and that another 100 test cases have been added to the plan.
Management now think: how come you only ran two test cases today, why are you going so slowly? WHAT!!!! Where did those other 100 test cases come from? Did you not do your job correctly to begin with?
However, the two test cases you ran today had lots of dependencies and very complex scripts.
Plus, your testers noticed that there appeared to be new features that had not been documented or reported, so you have had to add another 100 test cases. Also, your testers actually think when they are testing, and came up with new edge cases and ways to test the product while they were testing.
Management start to panic: you reported on day one that 60% of the testing had been completed, and now you are saying only around 30% has been completed (62 of 200 test cases). Stakeholders are not going to be happy to hear that we have only covered 30% when, the day before, we reported that 60% had been completed.
This continues. Your testing team are really good testers and keep finding more test ideas, which are turned into test cases. So at the end of day seven, the end of the first week, you report that you have only completed 8% of all the test cases. You get fired for incompetence and the project never gets released.
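The arithmetic behind the story's swings can be worked through directly, using the numbers from the scenario above:

```python
# The "percent complete" figure from the scenario above, worked through.
# It swings wildly because the denominator (total test cases) keeps growing.

planned = 100      # test cases reported before testing starts
executed = 60      # run by the end of day one
print(f"Day 1: {executed / planned:.0%} complete")   # Day 1: 60% complete

# Day 2: only two more cases run, but 100 newly discovered cases are added.
executed += 2      # 62 run in total
planned += 100     # 200 test cases in total
print(f"Day 2: {executed / planned:.0%} complete")   # Day 2: 31% complete
```

Nothing that was tested on day one has been "untested", yet the reported progress has dropped from 60% to roughly 30%, which is exactly the conversation the story describes.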
Many people reading this may have experienced something similar. What worries me is that there are still people stating that the best way to measure testing is by the use of test cases!
The question now is: if measuring by test case counts is not a good way to measure, then what can we do?
The following suggestions are my own and are what I apply within my test approach. That does not mean they will work for everyone, nor am I saying this is the best approach to take; the purpose of my blog is to offer suggestions about testing that could be useful to some people.
In my current testing environment, if I tried to report the testing effort using the traditional test-case counting approach it would be of little (or zero) value, since the test case number would be constantly changing.
What we do is split functions, features and so on into test charters, as per the exploratory testing approach; these 'test charters' are the test focus areas of the software. If a new function or feature is discovered, a new charter is created.
We then use the Session-Based Test Management approach (James and Jon Bach - http://www.satisfice.com/sbtm/) and run sessions based upon mission statements and test ideas. During a testing session the testers are encouraged to come up with new test ideas or new areas to test; these are captured either during the session or during the debrief.
The reporting of progress is done at the test charter (test focus area) level, and the test manager reports in the following way:
Test focus area 1: testing has started; there are a few issues in this area (issue x, issue y, issue z) which need to be resolved before there is confidence that the area is fit for its purpose.
Test focus area 2: has been tested and is fit for its purpose.
Test focus area 3: testing has started and some serious failures have been found (defect 1, defect 2, defect 3).
And so on.
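A charter-level report of this kind is easy to assemble from simple data. The sketch below uses made-up charter names, statuses and issue labels that mirror the example above; none of it comes from a real project.

```python
# Charter-level (test focus area) reporting, as described above.
# All names, statuses and issue labels here are illustrative only.

charters = {
    "Test focus area 1": {"status": "testing started",
                          "issues": ["issue x", "issue y", "issue z"]},
    "Test focus area 2": {"status": "tested, fit for purpose",
                          "issues": []},
    "Test focus area 3": {"status": "testing started, serious failures found",
                          "issues": ["defect 1", "defect 2", "defect 3"]},
}

def report(charters):
    """Builds a one-line-per-charter status summary for stakeholders."""
    lines = []
    for area, info in charters.items():
        line = f"{area}: {info['status']}"
        if info["issues"]:
            line += " (open: " + ", ".join(info["issues"]) + ")"
        lines.append(line)
    return "\n".join(lines)

print(report(charters))
```

The useful property is that the report stays stable as test ideas come and go: sessions and ideas churn underneath, but the stakeholder-facing unit is the focus area and its open issues.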
Some people may ask: but how will this tell us if we will meet the deadline for testing? I am sure it will NOT tell you whether you will finish ALL of your testing before the deadline, since testing is infinite; we as testers will carry on testing until we meet a stopping heuristic (see Michael Bolton's article on stopping heuristics: http://www.developsense.com/blog/2009/09/when-do-we-stop-test/).
The problem with testing is that "have you completed your testing yet?" is not a yes-or-no question. Every time a skilled tester looks at the software they can come up with more and more test areas and test ideas that they could carry out. These may or may not add to confidence in the suitability of the software and whether it is fit for its purpose. What is required is a test manager who talks to and listens to their test team, sees which test areas are the most important, and MANAGES test sessions based upon what is critical – basically, some good old prioritizing. The test manager needs to ask the difficult questions of the stakeholders and project managers.
In return, the stakeholders and project managers must trust the test team: when the team reports that an area has been 'sufficiently' tested, they believe them.
To summarize – instead of reporting on a small area of testing such as test cases, move a couple of level ups and report on the progress for test areas/functions./features based upon the importance of the feature. This may not tell you if you will compete the testing before the deadline but it will show you how well the testing is progressing in each functional area at a level that stakeholders can relate to and understand. The trust your stakeholders will have in you should improve since you are giving them a story about the progress of the testing effort without trying to hide things using numbers.
The question "but how many test cases?" was raised recently in this blog post: http://clarotesting.wordpress.com/2010/07/21/but-how-many-test-cases/
and several people blogged a response:
Simon Morley added his view here: http://testers-headache.blogspot.com/2010/07/test-case-counting-reflections.html
Jeroen Rosink added his here: http://testconsultant.blogspot.com/2010/07/repsonse-on-how-many-test-cases-by.html
and Abe Heward directed people to a similar blog he had written earlier: http://www.abeheward.com/?p=1
Each of these blogs makes very valid points about how unhelpful counting test cases is as an indicator of the progress or coverage of the testing effort.
The aim of this blog is to try and expand upon these posts and see if there are ways in which we could measure testing effort and progress without resorting to numbers.
To start with, we shall take a look at a made-up situation.
You are testing on a project which has a two-week testing cycle. Your manager has requested that you report the following each day:
- How many test cases you have
- How many have been run
- How many have passed
- How many have failed.
(Does this seem familiar to anyone?)
So before you start testing you report to your manager that you have 100 test cases to run over the two-week cycle.
At the end of day one you report the following:
- Test cases run: 60
- Test cases passed: 59
- Test cases failed: 1
- Defects raised: 1
- Test cases still to run: 40
So management thinks: cool, we are ahead with the testing, 60% done in one day.
At the end of day 2 you report:
- Test cases run: 62
- Test cases passed: 61
- Test cases failed: 1
- Defects raised: 1
- Test cases still to run: 138
Management now asks: how come you only ran two test cases today, why are you going so slowly? And WHAT!!!! Where did those other 100 test cases come from? Did you not do your job correctly to begin with?
However, the two you ran today had lots of dependencies and very complex scripts.
Plus, your testers noticed what appeared to be new features that had not been documented or reported, so you have had to add another 100 test cases. Also, your testers actually think while they are testing, and they came up with new edge cases and new ways to test the product.
Management starts to panic – on day one you reported that 60% of the testing had been completed, and now you are saying only around 30% has been. Stakeholders are not going to be happy to hear that we have only covered 30% when the day before I reported to them that 60% had been completed.
This continues. Your testing team are really good testers and find more and more test ideas, which are turned into test cases. So at the end of day seven you report the following:
- Test cases run: 1200
- Test cases passed: 1109
- Test cases failed: 91
- Defects raised: 99
- Test cases still to run: 10000
So at the end of the first week you have completed barely 11% of all the test cases. You get fired for incompetence and the project never gets released.
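For anyone who wants to check the arithmetic, the day-by-day "progress" figures from this made-up story can be reproduced in a few lines of Python (the numbers are the illustrative ones from the example above, not real data):

```python
# "Percent complete" computed from a test-case total that keeps
# growing as testers learn about the product. Figures are taken
# straight from the made-up story above.
days = [
    # (day, cumulative cases run, cases still to run)
    (1, 60, 40),        # 100 known cases, 60 run: looks 60% "done"
    (2, 62, 138),       # ~100 new cases discovered: drops to ~31%
    (7, 1200, 10000),   # the backlog has exploded: ~11%
]

for day, ran, remaining in days:
    total = ran + remaining
    pct = 100 * ran / total
    print(f"day {day}: {ran}/{total} cases run = {pct:.0f}% 'complete'")
```

The "progress" number falls even while the team works flat out, which is exactly why it misleads stakeholders.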
Many people reading this may have experienced something similar to the above. What worries me is that there are still people stating that the best way to measure testing is by counting test cases!
The question now is: if counting test cases is not a good way to measure, then what can we do?
The following suggestions are my own and what I apply within my test approach; that does not mean they will work for everyone, nor am I saying this is the best approach to take. However, the purpose of my blog is to offer suggestions about testing that could be useful to some people.
I work in the following testing environment:
- Agile based – 2 week iterations
- Customer requirements change frequently
- Code delivered daily
- Functions and features added without supporting documentation
- Use a mixture of scripted and exploratory testing
If I tried to report the testing effort using the traditional test case scenario it would be of little (or zero) value, since the number of test cases would be constantly changing.
What we do is split functions, features etc. into test charters, as per the exploratory testing approach; these 'test charters' are the test focus areas of the software. If a new function or feature is discovered, a new charter is created.
We then use the Session Based Test Management approach (James and Jon Bach - http://www.satisfice.com/sbtm/) and run sessions based upon mission statements and test ideas. During a testing session the testers are encouraged to come up with new test ideas or new areas to test; these are captured either during the session or during the debrief.
Progress is reported at the test charter (test focus area) level. The test manager reports in the following way:
- Test focus area 1 – testing has started; there are a few issues in this area (issue x, issue y, issue z) which need to be resolved before there is confidence that the area is fit for its purpose.
- Test focus area 2 – has been tested and is fit for its purpose.
- Test focus area 3 – testing has started and some serious failures have been found (defect 1, defect 2, defect 3).
And so on.
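The charter-level report above can be modelled with a tiny data structure. The following is a hypothetical sketch in Python; the charter names, status strings and issue labels are my own invention, not part of any SBTM tool:

```python
# Hypothetical sketch of charter-level status reporting.
from dataclasses import dataclass, field

@dataclass
class TestCharter:
    name: str                    # test focus area
    status: str = "not started"  # e.g. "in progress", "fit for purpose"
    issues: list = field(default_factory=list)

def report(charters):
    """Produce a charter-level summary like the one described above."""
    lines = []
    for c in charters:
        line = f"{c.name}: {c.status}"
        if c.issues:
            line += " - open issues: " + ", ".join(c.issues)
        lines.append(line)
    return "\n".join(lines)

charters = [
    TestCharter("Login", "in progress", ["issue x", "issue y"]),
    TestCharter("Reporting", "fit for purpose"),
]
print(report(charters))
```

Note there is no test-case count anywhere: the report is a story about each focus area, which is the point of the approach.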
Some people may ask: but how will this tell us if we will meet the deadline for testing? I am sure it will NOT tell you whether you will finish ALL of your testing before the deadline, since testing is an infinite activity; we as testers will carry on testing until we meet a stopping heuristic (see Michael Bolton's article on stopping heuristics: http://www.developsense.com/blog/2009/09/when-do-we-stop-test/).
The problem with testing is that there is no simple yes or no answer to the question "have you completed your testing yet?". Every time a skilled tester looks at the software they can come up with more and more test areas and test ideas they could carry out. These may or may not add to our understanding of whether the software is fit for its purpose. What is required is a test manager who talks to and listens to their test team, sees which test areas are the most important, and MANAGES test sessions based upon what is critical – basically some good old prioritizing. The test manager needs to ask the difficult questions of the stakeholders and project managers:
- What features can you do without?
- What are the critical areas that are required?
- Function abc has many serious problems – it can cause problems x,y,z for your users. Do you need function abc?
- We have tested all the key functions and found the following problems x,y,z. You want to release tomorrow, are you OK with these known issues?
In return, the stakeholders and project managers must trust the test team and accept that when the team reports that an area has been 'sufficiently' tested, it has been.
To summarize – instead of reporting on a small unit of testing such as test cases, move a couple of levels up and report on progress for test areas/functions/features based upon the importance of each feature. This may not tell you whether you will complete the testing before the deadline, but it will show how well the testing is progressing in each functional area, at a level that stakeholders can relate to and understand. The trust your stakeholders have in you should improve, since you are giving them a story about the progress of the testing effort without trying to hide things behind numbers.
Tuesday, 27 July 2010
DANGER - Confirmation Bias
In my previous blog I touched upon a term called confirmation bias and how, as testers, we should be aware of it. I stated that I would put a blog together on the subject, so here it is.
I should start by defining what confirmation bias is.
Confirmation bias refers to a type of selective thinking whereby one tends to notice and to look for what confirms one's beliefs, and to ignore, not look for, or undervalue the relevance of what contradicts one's beliefs:- http://www.skepdic.com/confirmbias.html
The reason I started to look more into confirmation bias was due to the following article in Ars Technica - http://arstechnica.com/science/news/2010/07/confirmation-bias-how-to-avoid-it.ars
A good example of this is if you are thinking of buying a new car and all of a sudden you seem to notice lots and lots of the model of car you were thinking of purchasing. Your mind is conditioning itself to notice this make and model and making you notice them more, even if there are no more around than there were before – you appear to be seeing them everywhere.
Another example is if you start talking to a friend about a certain film and actor and then suddenly notice lots of coincidences: the actor is in an advert, the film is being shown again on TV, a supporting actor is in another film you have just started to watch. The following gives a good example of this: http://youarenotsosmart.com/2010/06/23/confirmation-bias/
If there was no such thing as confirmation bias there would be no conspiracy theories. Conspiracy theories are built upon information which appears to prove the theory correct; those who believe in the theory ignore the evidence that debunks it.
So why is there any concern for testers?
Let us start with an example.
You are working closely with the development team and you start to ask them questions about the release you are about to test. You ask their viewpoint on which areas they feel are the most risky and which the least, so you can adjust your priorities as required – a pretty standard exchange between developers and testers. You now start testing, beginning with the high risk areas and working your way down to the low risk areas.
You find a few serious bugs in the high risk areas (as expected) and you find no problems in the low risk areas.
After release a major bug is reported in the low risk area you tested. How did you miss the bug? Did you see the bug, but your thinking was that everything was working all right? Did confirmation bias play a part? Did your subconscious hide the bug from you? Now this gets very scary: most people who work in software testing know that some bugs try to hide from you – we expect them to hide in the software. What happens if they decide to hide in your brain?
So how can we try and prevent confirmation bias?
The quick and easy way to try and prevent confirmation bias is to ensure that more than one tester tests the same feature; each may bring their own confirmation bias, but hopefully it will be different from the previous tester's bias. There is more chance that it will be different if the testers have not discussed the area under test beforehand.
Another way to try and prevent confirmation bias is to do 'paired testing', either with a software engineer, another tester or a user. That way you can question each other with regard to what is true and what is false. There is a chance that you could cross-contaminate each other with your own confirmation biases, but the risk should be less than if you are working on your own.
It is not easy to remove confirmation bias, since it is infectious. Working on a software development project requires testers to communicate more and more with other areas of the business, and at each stage and with each conversation confirmation bias could be introduced.
So should we lock ourselves away in a dark room with no communication with anyone else on the team? I think I would get out of testing as a career if that happened; the Social Tester (@Rob_Lambert) would become the anti-social tester – time to get him an ASBO (for our non-UK readers: http://en.wikipedia.org/wiki/Anti-Social_Behaviour_Order).
My view is that there is no realistic way to prevent confirmation bias, given the way software development projects work and the need for everyone to be able to communicate with each other. However, if testers are aware that there is such a thing as confirmation bias then they can try and take steps to ensure it does not creep into their testing. That is the whole concept and point of this blog – to help raise awareness of confirmation bias and how it can affect your testing.
Monday, 19 July 2010
The Emotional Tester (Part 2)
The first part of this blog looked at how our emotions could affect how we test. This second part will look at how we could capture our feelings when testing and could this provide us with any useful information about the product we are testing. Could it prove to be a useful oracle when testing?
On twitter @testsidestory said the following:
That is done regularly in usability labs: capture emotions and facial expressions of the users as they use the s/w
This was in response to a question that I posted on twitter:
…. - what I am thinking is that we need to capture our mood when #testing it could indicate a problem in the s/w…
The concern with this is that it would be very expensive for the majority of people to implement. So I thought about how we could implement a system that captures emotional state and is both effective and inexpensive.
One idea I had was to use a concept from the book Blink by Malcolm Gladwell, in which he talks about how important our initial emotion/reaction is when we first encounter something. He discusses how often our 'gut reaction' proves to be correct, using the example of a statue that a gallery had bought after a lot of scientific experts, who had tested the statue, said it was genuine. A couple of art experts who got to see the statue in private viewings before it was unveiled had a 'feeling' that there was something wrong about it; their initial gut reaction was telling them it was a fake. Several months later it was discovered to be a fake.
The above is a concise retelling of the story within the book. Why did the scientific experts get it so wrong? Could it be that confirmation bias played a part? The scientific experts wanted so much to believe that the statue was real that they biased the results, or avoided obvious facts that pointed to it being a fake. I think confirmation bias is a great subject and one I will look at from a testing perspective sometime in the future.
- So can we use this ‘gut reaction’ concept in testing?
- Would it be of any value?
I should state that I have not tried any of the following ideas; if anyone would like to volunteer within their organization to 'trial' them, I would be most interested. Due to circumstances I currently do not have the ability to try this out on a large scale.
The first problem we face is how we capture our initial reaction to what we are testing. The requirements for this are that it is:
- Easy to capture
- Simple
- Quick
My thought is to use different smileys, which are simple and quick to create and capture, thus covering all the requirements.
My idea would be to use three different smileys:
- Happy
- Neutral
- Unhappy
Why use smileys?
The idea behind using smileys is that anyone can draw them, no matter how artistic they are, and from a measurement perspective it is very easy to recognize and see patterns when using such well known symbols. The other, longer term, thought was that the range is easy to extend – adding sad, angry, and extremely happy if you wish to cover more emotions and feelings.
Capturing the initial feeling/emotion.
If you are working in an environment in which you are carrying out exploratory testing and following mission statements (session based testing) then this is very simple to implement. The idea is that when testers start their mission (session) they should, within the first couple of minutes (five at most), record their emotion/feeling about the software using the smileys.
If this was done for every session, and captured in such a way that it was easy to see at a glance which areas (test charters) testers are unhappy with, it could provide some useful information.
So you now have a whole set of data regarding the testers' initial feelings about the software they are testing; what does this information tell you?
For example, if a certain test focus area shows that all the testers are unhappy in that area, would this indicate a problem? I feel it could indicate something wrong there, but you would need to talk to the testers and gather more information (obtain context). The great thing about capturing initial feelings towards the software is that it could help the development teams to focus on areas where there could be implied problems.
This approach could be taken a step further by getting the testers to add another smiley when they have finished the session, to see how they feel about the software afterwards. You now have two sets of data and can compare any discrepancies between the two.
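As a sketch of how cheaply this could be captured, here is a minimal Python example that records a start and an end smiley per session and groups them by test charter. The charter names and the session data are invented purely for illustration:

```python
# Group (start smiley, end smiley) pairs by test charter so that
# mood shifts and unhappy endings stand out at a glance.
from collections import defaultdict

sessions = [
    # (charter, smiley at session start, smiley at session end)
    ("Search",  "happy",   "unhappy"),
    ("Search",  "neutral", "unhappy"),
    ("Billing", "unhappy", "unhappy"),
]

by_charter = defaultdict(list)
for charter, start, end in sessions:
    by_charter[charter].append((start, end))

for charter, moods in by_charter.items():
    shifts = sum(1 for s, e in moods if s != e)
    ended_unhappy = sum(1 for _, e in moods if e == "unhappy")
    print(f"{charter}: {len(moods)} sessions, "
          f"{shifts} mood shifts, {ended_unhappy} ended unhappy")
```

Even a summary this crude would flag the made-up "Search" charter as one to talk to the testers about.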
What would you think if the majority of testers were happy about a certain test focus area but at the end of the session they were unhappy?
Does this indicate a problem?
Or what if it was the opposite mostly unhappy and at end of session they were happy?
Also if they were unhappy at the beginning and at the end, their gut reaction proves to be correct, does this give an indicator that there are some major issues within that area?
Could this indicate frustration with the system, lack of knowledge maybe?
In my opinion this approach could prove to be a very useful oracle on the quality of the software.
What do you think?
Could this prove to be useful?
I would love some feedback on this idea - good or bad.
Friday, 16 July 2010
The Emotional Tester (PART 1)
This blog is going to be in two parts, the first will focus on the question of do emotions affect the quality of testing. The second will look at ways in which we can gather information about how we feel about the product we are testing to see if there is any value in capturing this information.
I have an amateur interest in psychology and how some of the ideas and thoughts from this area of science can be used in software testing. I was reading ‘The Psychology of Problem Solving’ by Janet E. Davidson & Robert J. Sternberg and it had a section on how emotions affect the way we think and focus.
So I decided to tweet a question based on some of the information I had read:
Emotions and #Testing:-Do we find more bugs when we are in a bad mood? Psychology research shows we are more negative when in bad mood.
It would be interesting to have feedback from #testing community on this - Does this mean a good tester has to be a grumpy so and so... :o)
It was not long before I started to receive replies on this.
@Rob_Lambert: @steveo1967 I don't really attribute negativity to being good at finding bugs. Positive attitude, passion, inclination...not negativity
@nitinpurswani I @steveo1967 i cannot think when i am in bad mood and i guess sapient testers think
@ruudcox @steveo1967 This article might help: Mood Literally Affects How We See World. http://www.medicinenet.com/script/main/art.asp?articlekey=100974
This turned in to a lively debate on which mood is better for testing.
After reading various articles there appeared to be some common ground on how we think and see things based upon our emotions and mood.
Looking at the article suggested by @ruudcox, it suggests that when in a good mood we can see the whole picture, while in an unhappy mood we narrow our focus.
This appears to be backed up by research from Bless & Schwarz:
Individuals in a sad mood are likely to use a systematic, data driven bottom-up strategy of information processing, with considerable attention to detail. In contrast, individuals in a happy mood are likely to rely on pre-existing general knowledge structures, using a top-down heuristic strategy of information processing, with less attention to detail (Bless & Schwarz, 1999).
This now leads to some complex dilemmas, and the whole point of this blog.
Which mood is best for someone who is a professional tester?
Which mood is more than likely to find more bugs when testing?
What other influences can affect our ability to test well?
My reading of the information and research suggests that to be really good at testing and finding defects we need to be in a sad or unhappy mood.
Research concludes that when in a sad or unhappy mood we are more likely to focus in on the task and step through it in a data driven way. When happy we are more likely to see the whole picture and look at the task from a top down approach.
Now in my opinion both of these traits are needed to be an excellent tester. So do testers need split personalities that they can switch on and off?
@nitinpurswani's point was that being in a bad mood stops him thinking, and that to be a sapient tester he needs to think. This got me thinking, and I asked him a question back:
@nitinpurswani I like that idea. However if you're in a bad mood with what u are #testing would it make you want to break it more?
My thought behind this is that if something is annoying or irritating me, I feel I am more than likely to work harder to find out why it is annoying me. I become deeply focused on the problem in front of me. Does this mean I am in a bad mood? Not necessarily – it could be that I am annoyed at what I am testing but not in a bad mood in general.
When in a happy mood while testing it is easy to just let things go; we unconsciously think, well, that is not too much of a problem, we can forget about it. This is a dangerous attitude for a tester, because this simple little problem can come back as a huge problem. Someone in an unhappy mood is more likely to investigate why this thing is annoying and find the underlying cause.
@Rob_Lambert made a very valid point that environmental issues could also come into play. How many testers listen to music while testing? Rob suggested that the type or style of music you are listening to can influence the mood you are in and, as a side effect, the way you are thinking. I had not thought about this very much, but going deeper: if you are working in an open office and everyone around you is having a laugh and joking, would this make your testing better or worse? What if a tester and a developer are having a heated debate about something that has just been tested? Will this influence your testing?
Does any of this article back up my earlier tweet that testers need to be grumpy so and sos?
However I think this view is too simplistic. I am often asked about testers and how they are different from developers. (There is still a big drive within testing that developer and tester can be the same person and be able to switch between the different roles). I have a feeling that some of the best testers can switch between different psychological emotional states when testing. They have the best of both worlds. Able to remain focused when something is bugging them and then when they have solved what is bugging them able to switch to a whole picture view of the system they are testing.
When I started to write this article I thought it would be very simple to come to a conclusion about how emotions can affect our ability to test and what is the best mood to be in to get the best out of testing. It has proven more difficult than I thought and I still have not come to any firm conclusion about which is the best.
The one interesting point that should be made is that as professional testers we need to be aware of our emotions and how they can affect the quality of the testing we are doing. Part 2 of this blog will be looking at how we can capture our emotion and feelings about the product we are testing and see if this could provide useful information.
I have an amateur interest in psychology and how some of the ideas and thoughts from this area of science can be used in software testing. I was reading ‘The Psychology of Problem Solving’ by Janet E. Davidson & Robert J. Sternberg and it had a section on how emotions affect the way we think and focus.
So I decided to tweet a question based on some of the information I had read:
Emotions and #Testing:-Do we find more bugs when we are in a bad mood? Psychology research shows we are more negative when in bad mood.
It would be interesting to have feedback from #testing community on this - Does this mean a good tester has to be a grumpy so and so... :o)
It was not long before I started to receive replies on this.
@Rob_Lambert: @steveo1967 I don't really attribute negativity to being good at finding bugs. Positive attitude, passion, inclination...not negativity
@nitinpurswani @steveo1967 i cannot think when i am in bad mood and i guess sapient testers think
@ruudcox @steveo1967 This article might help: Mood Literally Affects How We See World. http://www.medicinenet.com/script/main/art.asp?articlekey=100974
This turned into a lively debate about which mood is better for testing.
After reading various articles there appeared to be some common ground on how we think and see things based upon our emotions and mood.
The article suggested by @ruudcox proposes that when in a good mood we can see the whole picture, and when in an unhappy mood we narrow our focus.
This appears to be backed up by research from Bless & Schwarz:
Individuals in a sad mood are likely to use a systematic, data-driven bottom-up strategy of information processing, with considerable attention to detail. In contrast, individuals in a happy mood are likely to rely on pre-existing general knowledge structures, using a top-down heuristic strategy of information processing, with less attention to detail (Bless & Schwarz, 1999).
This leads to some complex dilemmas, which are the whole point of this blog:
Which mood is best for someone who is a professional tester?
Which mood is more likely to help us find more bugs when testing?
What other influences can affect our ability to test well?
From the information and research I have read, my thinking is that to be really good at testing and finding defects we need to be in a sad or unhappy mood.
Research concludes that when in a sad or unhappy mood we are more likely to focus in on the task and step through it in a data-driven way. When happy we are more likely to see the whole picture and look at the task from a top-down approach.
Now, in my opinion, both of these traits are needed to be an excellent tester. So do testers need split personalities that they can switch on or off?
@nitinpurswani made the point that being in a bad mood stops him thinking, and that to be a sapient tester he needs to think. This got me thinking, and I asked him a question back:
@nitinpurswani I like that idea. However if you're in a bad mood with what u are #testing would it make you want to break it more?
My thought behind this is that if something is annoying or irritating me, I feel I am more likely to work harder to find out why. I become deeply focused on the problem in front of me. Does this mean I am in a bad mood? Not necessarily – it could be that I am annoyed at what I am testing but not in a bad mood in general.
When testing in a happy mood it is easy to just let things go; we unconsciously think that is not too much of a problem and forget about it. This is a dangerous attitude to have as a tester, because these simple little problems can come back as huge ones. Someone in an unhappy mood is more likely to investigate why this thing is annoying and find the underlying cause.
@Rob_Lambert made a very valid point that environmental issues could come into play. How many testers listen to music while testing? Rob suggested that the type or style of music you are listening to can influence the mood you are in and, as a side effect, the way you are thinking. I had not thought about this very much, but going deeper: if you are working in an open office and everyone around you is laughing and joking, would this make your testing better or worse? What if a tester and a developer are having a heated debate about something that has just been tested? Will this influence your testing?
Does any of this back up my earlier tweet that testers need to be grumpy so-and-sos?
However, I think this view is too simplistic. I am often asked how testers are different from developers. (There is still a big drive within testing for developer and tester to be the same person, able to switch between the different roles.) I have a feeling that some of the best testers can switch between different psychological and emotional states when testing. They have the best of both worlds: able to remain focused when something is bugging them and then, once they have solved it, able to switch to a whole-picture view of the system they are testing.
When I started writing this article I thought it would be very simple to come to a conclusion about how emotions affect our ability to test and which mood gets the best out of testing. It has proven more difficult than I thought, and I still have not come to any firm conclusion.
The one interesting point that should be made is that, as professional testers, we need to be aware of our emotions and how they can affect the quality of our testing. Part 2 of this blog will look at how we can capture our emotions and feelings about the product we are testing and see if this could provide useful information.