Tuesday, 30 April 2013

A tester’s bookshelf

The inspiration for this post is a blog post I saw by Paul Gerrard on top ten books for testers, which was in turn inspired by a question asked by Huib Schoots via the Software Testing Club.

I thought I would put a different take on the question: not a top ten list or anything like that, but a list of books I have recently read, am currently reading or intend to read soon.  This is not a post recommending these books, rather an insight into my eclectic choice of reading, which some of you who read my blog may find useful.

Recently Read

Antifragile: Things that Gain from Disorder -  Nassim Nicholas Taleb
A great book for testers: it talks about how we can get better by subjecting ourselves to more stress and disorder. There are many connections to how we think about and do testing, and how we can do small things to gain large rewards.  There is a video from the New York Public Library of a discussion between Nassim Taleb and Daniel Kahneman about how we make decisions when faced with uncertainty; I highly recommend watching it.

The Hundred-Year-Old Man Who Climbed Out of the Window and Disappeared - Jonas Jonasson 
One of the best fiction books I have read this year: a great story and one I enjoyed.  The book tells the life story of a 100-year-old man and the famous people he met along the way.

You are Not So Smart: Why Your Memory is Mostly Fiction, Why You Have Too Many Friends on Facebook, and 46 Other Ways You’re Deluding Yourself – David McRaney
A nice, easy book to read, with some great biases and cognitive illusions explained in simple language.  If you are a tester you may find some of this very useful.

A Street Cat Named Bob – James Bowen
I am a cat person (get over it).  A true and remarkable story of a homeless man who finds a cat and takes it under his care.  YouTube has a cool video of James and Bob.

Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing -  Elisabeth Hendrickson
One of the best books I have read on using exploratory testing (outside of the Rapid Software Testing course).  It is permanently on my desk at work and one I refer to at least once a week for ideas when I am suffering from a testing block.  It is great for inspiring ideas and thoughts on all aspects of testing.


Tacit and Explicit Knowledge -  Harry Collins
I started this on a recommendation from Michael Bolton. I am only a couple of chapters in and am finding so much related to testing, especially the way we learn. I feel this could have a significant impact on the way we teach testing.

Blah Blah Blah – What to do when words don’t work – Dan Roam
A great book on how we report information. I have almost finished it, and it has some ideas on how I can report better and convey information in a way that does not mislead people.

Intend to read next

This explains everything – John Brockman
What drew me to this book was the names mentioned on the cover: Nassim Taleb, Richard Dawkins, Steven Pinker, Martin Rees – to name a few.  All of whom have written books I have read, hence it piqued my interest – hopefully it will be as good as I expect.

Game storming – Dave Gray
I have an interest in how we can be more creative and create more ideas, very useful for when testing and especially test planning.  So I got this book based upon the concept that it contains useful techniques to improve idea generation and creativity.

Making Social Science Matter – Bent Flyvbjerg
I have finally got around to adding this to my bookshelf after it was recommended some time ago by Michael Bolton (again).  I have a great interest in social science and how we in testing can learn from this area.  This book presents new thinking on how to approach social science research.  Based on the reviews I have read, I feel there could be some good connections with testing in this book.

This is by no means an exhaustive list and there are many more books I could have included in the recently read section, but I wanted a nice short blog post for a change.  I should note that the books I intend to read next can change depending on what interests me at the time.  This is the reason I could not do a top ten list: such a list depends so much on context.  It would depend on so many different things, and I would find it hard to narrow it down to one context, since in most books I read I can relate something back to testing – even the fiction.

Wednesday, 24 April 2013

Testing Qualifications - Certification revisited

Let me start this article by saying I am not against testing qualifications at all; they can be a useful resource. Note I use the word qualification; certification is becoming a bit of a loaded word. Richard Bradshaw talked about his experience here.  I tend to agree with what Richard is saying: as a starting point ISTQB (or whatever name they wish to use) is a good idea and concept.  For those who have never been involved in testing and wish to know more about ‘some’ of the techniques and methods, and a little about the history of testing, it can be a useful building block.  My concern is that it is not being sold in this way, and that it becomes a filter used by companies and employment agencies.  I have talked about this previously here.  Danny Dainton also talked about his views on ISTQB from the new tester's perspective here.

The reason I have revisited this discussion is the recent debate on Twitter about test certification, and especially ISTQB.  Part of the discussion was about the way numbers appear to have been used in an advertisement urging companies to ensure their testing teams get the ISTQB certification.

The advert can be seen here. Googling a little more provided me with this information.

The bit that concerns me with this sort of misleading propaganda is this:
Various studies estimate the cost of a post-production software defect in the range of $4,000 - $5,000.[1] If ISTQB Software Tester Certification can help a software tester to eliminate just one post-production defect in his or her career, the return on investment for an ISTQB exam could exceed 1500%. With the Volume Purchase Program, that ROI could exceed 2000%.
Where are these figures obtained from?  Given the way software development works today, the claim that fixing a defect later costs so much more is a fallacy.  Details of the cost-of-defect fallacy can be found here, and ANYONE working in software development should read this great book – The Leprechauns of Software Engineering.
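To see how thin the advert's arithmetic really is, here is a quick sketch of the claimed ROI calculation. The exam fee of $250 is my own assumption (back-solved from the claimed 1500% figure; the advert never states it), and the whole claim rests on the unverifiable premise that certification prevents exactly one post-production defect:

```python
# Hypothetical reconstruction of the advert's ROI claim.
# The $250 exam fee is an assumption back-solved from the claimed
# 1500% figure; the advert never states the actual fee.

def roi_percent(defect_cost, exam_cost):
    """ROI as a percentage: (benefit - cost) / cost * 100."""
    return (defect_cost - exam_cost) / exam_cost * 100

# The claim: one avoided $4,000 defect against a $250 exam fee.
print(roi_percent(4000, 250))   # 1500.0

# With a volume-purchase discount (say $190 per exam, also invented),
# the headline figure inflates past 2000%.
print(roi_percent(4000, 190))
```

Notice that the "ROI" is driven entirely by the assumed defect cost and the assumed single prevented defect; change either input and the headline number changes arbitrarily, which is exactly the problem with this kind of marketing.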

This then led to a talk that Rex Black was giving at STPCON, in which the phrase:

“…greatly lowers the cost of post-release defects.”

can clearly be seen.

This is what concerns me a great deal: everything within the ISTQB world appears to be focused on profit and on how to make the most money with the least effort.  Someone may correct me if I am wrong on that one.  I do hope that I am, and that the people who provide this training really do care about the testing profession, and that what people pay their hard-earned money for is genuinely worthwhile to them.  The problem is that when profit is involved in learning activities, the needs and best interests of the student normally rank lower than the ROI for the training company's shareholders.

I am not against testing qualifications; I am against the way they are being sold and used within the profession.  I do not like the use of multiple-choice exams that someone can pass by rote learning without knowing anything really valuable about testing.

We need a system of learning in which we can learn the basics, practice them and be assessed on our thinking and reasoning. The problem is that this is too difficult to do en masse, since it eats into profit – hence my concerns about profit coming before learning.

There are other testing training opportunities, such as the Rapid Software Testing course by James Bach and Michael Bolton.

The Black Box Software Testing course created by Cem Kaner is another step in the right direction.

How come we do not hear more about these?

Is the lobbying and scaremongering of the ISTQB too big?  I really hope not. There are many passionate testers in the world, and we have a moral and ethical obligation to provide the right training and learning opportunities for them.  We need to stop using false data and information to scare companies and managers into making people attend these courses.

We need to be truthful to both those attending such courses and those paying for them.   Maybe there should be a disclaimer on the ISTQB website?

  • This foundation exam will teach you about some methods and techniques of testing; however, it will NOT be able to prove the testing competence or abilities of those sitting the examination.
  • It will be able to tell you that the person taking the exam has a good ability to remember things, or got very lucky when selecting multiple-choice answers at random.

I think I would be more comfortable with ISTQB if they provided alternatives and did not sell the exam as a way to become a competent tester.

Off track slightly – when I decided to become a rugby league coach many years ago I had to do the following:

  • an exam
  • a practical assessment with the trainer
  • 3 assessments in the field with an assessor
  • keep a training diary for a year of new things I learnt or implemented

Only after that year could I class myself as a competent rugby coach.   Hmmm, that is an idea – maybe ISTQB could do that kind of assessment and training?

We are all responsible for our own training and learning, and there are many ways in which we can learn. The problem is that I see, time and time again, people who call themselves testers yet have done no learning or training specifically about testing since sitting the ISTQB foundation course.  This is what really needs to change in our craft.  We need to have a passion for what we enjoy.

I will finish this article with something I tweeted during the discussion, and something I really do believe in:

Testing is about asking questions and using critical and creative thinking.  It cannot be measured with a simple pass/fail

Friday, 12 April 2013

Creative and Critical Thinking and Testing Part 7

This is the final post in this series on creative and critical thinking and testing.  It has been a journey of discovery for me, and along the way I have found that there is more to how we think when testing than even I first thought – and all of this came about from an initial post-it note diagram. Along this journey we have:

Looked at the thinking required for documentation review, test planning, test execution and test analysis.

This final post of the series will look at the styles of thinking required when we are reporting our testing.

Test Reporting

So after the planning, execution and analysis you are now ready to report your findings.  The style of thinking required for this phase appears obvious: you need to be creative in how you present the information you have found.  You need to make sure it is clear and easy for your reader to understand, without any chance of it being misunderstood or, more dangerously, misused.  To do this you will need to think a little critically and ask yourself the following about the information you are presenting:

  • Can it be interpreted in different ways?
  • Is the most important information clearly shown (in the context of what is important to the reader)?
  • Have I made the context clear?
  • If using numbers, am I using them to back up a story?
  • Have I made the main risks and issues very clear?
  • Is what I am reporting ethically and morally correct?

There are many more questions you can ask yourself, but the key to this level of critical thinking is to ensure you are objective about what you report and as unbiased as possible in your reporting.  There are a few methods that can be used to help with reporting, and we can learn a little from the medical world and how they use critical thinking in medical research reporting:

“Students should also be encouraged to think about why some items need to be reported whereas others do not”

It is important to think about what should not be included, since this aids clarity for the reader.

Returning to creative thinking, one effective and creative way to present your testing information is the use of dashboards.

James Bach talked about a low-tech testing dashboard on the Rapid Software Testing course and has some information about it on his website.

Low Tech Testing Dashboard Example – James Bach - taken from these slides

Del Dewar went slightly further and presented the dashboard as shown below.

More information on what each of the columns, numbers or colours means can be found via the links – in other words, I want you to do a bit of research into dashboards and how useful they may be for your own test reporting.

From my own experience of using these styles of dashboard for test reporting, what I found was that they gave a very quick overview of the project and its issues, but were not good at reporting progress, which is something test management required – and this leads to storytelling and metrics.
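As a rough illustration of the idea (the area names, effort values and quality flags below are invented for the example, not taken from James Bach's or Del Dewar's actual dashboards), a low-tech dashboard is essentially a small table of product areas against effort, coverage and an assessed quality status:

```python
# Minimal low-tech dashboard sketch. All area names and values are
# made up for illustration; a real dashboard uses the team's own
# areas and an agreed legend for effort, coverage and quality.

rows = [
    # (area,     effort,   coverage, quality)
    ("Login",    "high",   "2/3",    "OK"),
    ("Reports",  "low",    "1/3",    "Concern"),
    ("Import",   "paused", "0/3",    "Unknown"),
]

header = ("Area", "Effort", "Coverage", "Quality")
print(f"{header[0]:<10}{header[1]:<8}{header[2]:<10}{header[3]}")
for area, effort, coverage, quality in rows:
    print(f"{area:<10}{effort:<8}{coverage:<10}{quality}")
```

The whole point of the low-tech format is that it fits on a whiteboard and invites conversation: anyone can glance at it and ask "why is Reports a concern?", which is exactly the discussion a page of metrics tends to suppress.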

Session Notes and Wiki Links
One more thing to add here: when I tried the above with senior management teams, there was a request for access to the data of what was actually tested – in other words, they wanted access to the session notes and actual evidence of what had been done.  To solve this, at the top level of each dashboard we provided a link to the wiki pages where we kept all the session notes. I encourage you to have full transparency of testing effort and to allow access to everyone who wants to see what testing has taken place; it helps if there are no barriers or logins in the way of people accessing the raw data.

If, as described earlier in this document, we are using session-based test management, then we should be producing evidence of what we have tested and the information we have found as we go along, using whatever method is best for capturing it: video, screen capture, wiki.  This should be kept in a place which everyone (who matters) can access and is aware of.

The next thing you need to do with your test reporting is to tell a story or two.  This again requires some deep critical thinking.  Michael Bolton says that test reporting is about telling three stories at three levels.  I provide a quick summary below; for full details refer to the original article, available here.

  • Story of the product – this is where you tell the story of the product using qualitative measures: words and creative descriptions of the product, what it does and did, and what you found interesting.
  • Story about testing – this is used to back up your story of the product: what did you do when testing, what did you see, and what did you cover.  It is about where you looked, how you looked, and where you have not yet looked.
  • Story about progress – the final story is about why you think the testing you did was good enough, and why what you did not test was less important (critical thinking).

Michael has a lot more information about test reporting in a great series of articles:

Markus Gärtner summarises this in his article titled “Tell a story at the daily stand up”.

As can be seen from the articles published by Michael Bolton, you quickly switch from one style of thinking to the other depending on the context of the story you are telling.  This is a difficult skill for a tester to master, but with practice you can become an expert test-report storyteller.

Another way in which you can be creative in reporting your testing, as well as your test planning, is by using mind maps. Darren McMillan produced a great article on this, which can be found here.

It is also important at this stage to remember your test plan and look at what information in it needs updating.  What you found out during testing, and how your risks and priorities may have changed, need to be reflected in your test plan.

Qualitative vs Quantitative
There have been many discussions within the testing community about qualitative and quantitative measurements, some of which I will share here as useful information.  It is very easy to fall into the trap of believing that numbers tell the whole story and, since they are easy to collect, will provide all the information management require.  We need to be careful of this and use our critical thinking to judge what the numbers really provide.

Cem Kaner has an excellent article on the validity of metrics here, and the thing I most noted from it was the following:

“If our customers demand these metrics then we provide them but we have a moral & ethical duty to inform that they are flawed”

I agree with this, but we need to use both our critical and creative thinking to provide the story that goes with the metrics.  I think we can all agree that quantitative measures are flawed, but we need to think creatively to resolve this and provide the information the customer requires in a way which is simple and easy to understand without misleading anyone.

Some of the discussions within the testing community on test metrics:


So you have got to the end of this article and hopefully have an understanding that the different stages of testing require different types of thinking at different levels.  So what do you do now?  First of all, this is not the end of the journey.  You now go back to the start and continue until told not to; at the same time you can continue to practice some of the lessons and examples given in this document.  Improve them, be creative, create your own, adapt them to fit your needs.  This is not a document of best practice; it is a springboard to help you – a reference guide, if you like – that can be modified and act as a starting point for you to improve and learn more about what style of thinking you may need when testing.

The important lesson is that testing requires a great deal of thinking and if you are not thinking when you are involved in any testing activity then maybe, just maybe, you may not be testing.


John Stevenson

Thursday, 4 April 2013

Creative and Critical Thinking and Testing Part 6

The previous articles in this series have looked at what critical and creative thinking are, defined the stages of testing, and looked at the thinking required for documentation review, test planning and test execution.  This article looks at the test analysis phase and the styles of thinking that may be best for this stage, as described in the diagram from the first article of this series.

Test Analysis

So you have finished your testing session and gathered together all sorts of evidence of what you have uncovered, discovered and learnt.  Now is the time to look in detail at what you have done and found, and apply some critical thinking to the task.

This is one stage within testing which I feel is understated, with often not a great amount of time and effort spent on it.  However, I think it is one of the most valuable stages for the tester who carried out the testing, since it allows them to analyse what they have done, think critically about themselves, and see if there are improvements they can make.

It is interesting, if you ‘Google’ ‘test analyst’ and ‘definition’, the variety of responses that are returned.   A selection of extracts is shown below:

“A test analyst reviews the results of process tests in a company's operating systems or manufacturing processes. The analyst also researches potential defects and works in tandem with engineers to provide solutions.” 
(Ehow.com – Test Analyst Job Description)
“In the position of test analyst, a person fulfils an important role by reviewing results from process tests in a business’s manufacturing or operating systems. The analyst will also research potential deficiencies and work together with engineers in order to provide solutions” 
 (Job is Job Test Analyst job description)
“The Senior Test Analyst is responsible for designing, developing, and executing quality assurance and control processes, test strategies, test plans and test cases that verify a software conformance to defined acceptance criteria (i.e. System behaviours) and feature design documents, as well as application standards”  
(Aus registry job specification Senior Test Analyst)
“Works with the Testing team to assist with the preparation of test plans and the testing of software in line with company guidelines and standards”  
(Vector UK Job description Test Analyst)

What I find interesting is that there appear to be two main definitions of a test analyst: one who analyses what has been tested, and one who plans, executes and analyses the test results.  It appears that over time what could have been a specialist role has become one which is interchangeable with the title of ‘tester’. There is nothing wrong with this, and it may digress slightly from the purpose of this section, but I thought it was a useful comparison of definitions of what a test analyst is.

In my own world it is a stage in which all testers need to be proficient, having the necessary skills and thinking to carry out the task. The analytical skills of testers, IMO, appear to be a forgotten skill, or one on which less importance is being placed.


The first thing that should be done after you have completed your test execution phase is to reflect on all you have done, and the best way to do this is to debrief and talk to other people.  There are many ways you can do this, but one model within the session-based test management frame is the use of PROOF (expanded to PROOF LA).  This is a very useful and quick way to explain what you have done, what you need to do and what you plan to do, by means of some simple questions:

  • Past. What happened during the session?
  • Results. What was achieved during the session?
  • Obstacles. What got in the way of good testing?
  • Outlook. What still needs to be done?
  • Feelings. How does the tester feel about all this?
  • Learning.  What have you learnt? What could you do better?
  • Adaptation. What do you need to adapt or change?

If you are working in small teams then it may not be possible to do this style of review; however, there is nothing to stop you doing some self-reflection and using these questions to debrief yourself.  You may notice that these questions require both critical and creative thinking in equal measure. The questions will make you think critically about what you have done or need to do, and about what you could do better, with improvement ideas that your creative thinking helps generate.
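If, like me, you keep your session notes electronically, the checklist above can be turned into a simple self-debrief template. The sketch below is my own illustration (the field names follow the PROOF LA questions; the example answers are invented, and this structure is not part of the session-based test management toolset itself):

```python
# Illustrative PROOF-LA self-debrief template. Field names follow
# the checklist above; the example session content is invented.
from dataclasses import dataclass, fields

@dataclass
class DebriefPROOFLA:
    past: str        # What happened during the session?
    results: str     # What was achieved?
    obstacles: str   # What got in the way of good testing?
    outlook: str     # What still needs to be done?
    feelings: str    # How does the tester feel about all this?
    learning: str    # What have you learnt? What could be done better?
    adaptation: str  # What needs to adapt or change?

session = DebriefPROOFLA(
    past="Explored the import feature with malformed CSV files.",
    results="Two issues logged; charter mostly covered.",
    obstacles="Test environment down for 40 minutes.",
    outlook="Still to try very large files and unusual encodings.",
    feelings="Frustrated by the environment, confident in coverage.",
    learning="The parser silently drops rows with too many columns.",
    adaptation="Book environment time in advance for the next session.",
)
print([f.name for f in fields(session)])
```

Filling in every field forces you to answer every question, which is the real value of the mnemonic: it is hard to skip the uncomfortable ones (Obstacles, Feelings) when the template leaves a visible gap.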


When you have completed the debrief it is important to spend a little more time thinking about the ‘new’ things you have learnt.  This is valuable information that could be used not just by you but by other people. Using the evidence you have gathered, you could put together some ‘how to’ guides if there were parts of what you did that were surprising or difficult.  This also aids your own memory and helps to reinforce your learning.  The added benefit is that others can look at this and use it to aid their own learning and understanding.  The way I implement this is by using a wiki, with a section for the project we are working on where we link to or create useful information.


Looking at your testing evidence, the next thing you may want to do is think critically about the issues you found which could be defects.  You may first want to try to repeat each issue to ensure it is reproducible (some might not be). You may want to talk to a developer, customer or architect to discuss whether it is really a defect.  If, after thinking about the issue, you want to raise it as a defect, attach the evidence you captured during the test execution phase.  Creating a good defect report requires a lot of critical thinking on the part of the tester, and I would highly recommend that you take the Bug Advocacy course, or at least work your way through the ‘free’ course material.  If you really want to get your defects fixed you need to present a compelling argument as to why each defect needs fixing; this course can help you achieve that.

*To do the Bug Advocacy course you need to complete the Black Box Software Testing foundations course first.


Once you have raised your defects, you could use a mixture of creative and critical thinking to see which of the evidence you have gathered would prove useful to automate.  In this context I am talking about the use of BDD test frameworks.  At the same time it could be worth using some creative thinking to see what automation tools could support your exploratory testing.  It is useful to remember that automation is not just a checking exercise; it can prove invaluable in aiding exploratory testing.  Michael Bolton wrote a blog post on this subject where he talks about manual and automated testing and the meaningless use of labels.


One area people carrying out exploratory testing appear to miss out on is looking for future opportunities to test.

We forget the definition of exploratory testing, especially the “test design” part:

“Simultaneous test design, execution and learning” 
 (Exploratory Testing explained)
If you did not make any notes of future things to test while carrying out the test execution phase, then you may soon have no job to do!  This is an important aspect of exploratory testing and one on which you need to remain focused when testing.

If you do have a list of ideas, this is the time to use some more critical thinking and see which ideas have value within the context of the project you are currently testing.  You can, if it helps, give each idea a priority and a risk value to support your critical thinking about the value of the idea. Another way to critically evaluate your test ideas is to discuss them with other members of the team.  You could also apply some test framing.  It should be noted that while you are critically evaluating your future test ideas, your creative thinking side may become active and come up with other novel ideas you may be able to test.  You should note these down as well, since they could prove valuable.
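One simple way to apply those priority and risk values is to score each idea and review the list highest first. The scoring scheme and the example ideas below are my own illustration, not a standard technique; the point is only that a crude number gets the important ideas to the top of the conversation:

```python
# Hypothetical test-idea triage: score = priority * risk, highest first.
# The ideas and the 1-5 scales are invented for illustration.

ideas = [
    {"idea": "Concurrent edits to the same record", "priority": 4, "risk": 5},
    {"idea": "Tooltip spelling across locales",     "priority": 2, "risk": 1},
    {"idea": "Session timeout during checkout",     "priority": 5, "risk": 5},
]

ranked = sorted(ideas, key=lambda i: i["priority"] * i["risk"], reverse=True)
for item in ranked:
    print(item["priority"] * item["risk"], item["idea"])
```

Treat the scores as a prompt for discussion, not a verdict: a low-scoring idea that a team member argues passionately for is itself useful information.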


We are not perfect, and we should constantly look for ways in which we can improve.  One aspect of test analysis is to reflect on what you have done and think critically about what improvements you could make.

The field of social science and ethnographic research can be helpful here, and I wrote an article on this. From that article I put together a way that testers can use reflection to help improve testing.  An extract can be seen below:

Personal reflection:

  • Could you have done things better if so what? (Both from a personal and testing perspective)
  • Have you learnt new things about the product under test (That are not documented)?
  • Has your view of the product changed for better or for worse? Why has your view changed?

‘Epistemological reflexivity’ (What limits did we hit?)

  • Did your defined tests limit the information you could find about the product?  (Did you need to explore new areas that you had not defined?)
  • Could your tests have been done differently? If yes how?
  • Have you run the right tests?
  • If you did things differently, what do you think you would have found out about the product?
  • What assumptions have you uncovered to be true/false?
  • Did the assumptions you make impede or help your testing?

The University of Plymouth produced an article on critical thinking and reflection which has some useful ideas that may help when you come to reflect on your own improvements.  They have also produced a nice critical thinking poster.

As you can see from the above, the test analysis phase mainly needs critical thinking, with some creative thoughts, since the majority of the work you do during this phase is to reflect on what you have done, what you need to do, and how well you have done it.

The next section will look into the test reporting phase.