Sunday, 30 June 2013

Measuring Test Coverage

This article follows on from my previous article, Why we need to explore, and looks at how, when we simplify the constructs of software development to expectations and deliverables, measuring test coverage becomes a difficult task. I should acknowledge that the model I use here is extremely simplified and is only used to aid clarification. There are many more factors involved, especially within the expectations section, as Michael Bolton quite rightly commented on in the previous article.

If we go back to our original diagram (many thanks to James Lyndsay), which shows our expectations and our deliverable, the area where they meet is where our expectations are met by the deliverable.

At a simplified level we could then make the following reasonable deduction.
We can express all our known expectations as 100% and, for measurement purposes, say that x% of our expectations have been met by the deliverable and y% have not. This gives us a simple metric to measure how much of our expectations have been met. This seems very clear and could, to some people, be a compelling measurement to use within testing. The following diagram gives a visual reference to this.

This is only half the story, since on the other side is the part where we need to do some exploring and experimentation. This is the stuff in the deliverable that we do not know or expect, and it is the bread and butter of our testing effort. The problem is that we do not know what is in this area, or how big or small it is (I will return to that point later). We are now in a measurement discomfort zone: how do we measure what we do not know? The following diagram shows a visual representation of this.

This measurement problem is compounded by the fact that as you explore and discover more about the deliverable, your tacit knowledge can become more explicit and your expectations start to grow. So you end up in the following situation:

Now your expectation percentage is 100%+ and, as you explore, it keeps growing. So your percentage of meeting and not meeting your expectations becomes a misleading and somewhat pointless metric.
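A minimal sketch can show why the metric slips. This is a hypothetical illustration, not anything from a real project: the deliverable never changes, yet the headline coverage number drops simply because exploration turns tacit expectations into explicit ones.

```python
# Hypothetical illustration of an expectation-coverage metric that looks
# precise but shifts as exploration uncovers new expectations.

def coverage(met, known):
    """Percentage of known expectations met by the deliverable."""
    return 100.0 * met / known

# Day 1: 80 of our 100 known expectations are met -> "80% coverage".
before = coverage(met=80, known=100)

# After some exploratory testing, 40 new expectations surface (tacit made
# explicit), of which only 5 happen to be met. The software is unchanged,
# yet the number falls.
after = coverage(met=85, known=140)

print(round(before, 1))  # 80.0
print(round(after, 1))   # 60.7
```

The software did not get worse between the two measurements; only our knowledge of it grew, which is exactly why the percentage on its own tells a misleading story.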

I was asked if there is anything that could be done to increase the area where the expectations are met by the deliverable, and this led to me adding another diagram, as shown below.

**Still not to scale

Since testing could in theory be an infinite activity, how much testing we do before we stop is determined by many factors; Michael Bolton has a superb list in an article here.

In summary, the amount we know and expect from a piece of software is extremely small in comparison to what we do not know about the software (the deliverable), hence my first post in this article series on the need to explore the system to find out the useful information. We need to be careful when using metrics to measure the progress of testing, especially when that measurement appears easy to gather.

Further Reading on Metrics and Testing.

Thursday, 27 June 2013

Why we need to explore

This article looks at why within testing we need to carry out exploratory testing.

It is based upon the following diagram that James Lyndsay  drew during a recent diversity workshop.

When we talk about expectations we mean something that is written up front, or known about before delivery of the product, for example requirements or specification documentation. There are other types of expectation that depend on the person, such as experience, domain knowledge or business knowledge.

The deliverable is the product being delivered and is normally the software under test.

When the product is delivered, sometimes the expectations are met by the deliverable (the area shown on the diagram where the ovals overlap) and other times they are not. Where they do overlap we can say that the deliverable meets the expectations in this area. This could be seen as what we would, within the testing world, call verifying requirements, user stories, and information that is known or that we know about.

**Note: the area where expectation meets deliverable is not to scale or representative of a real-world situation; it is for clarification purposes only.

The expectations to the left of the overlap are where our expectations are not met by the deliverable; these could be defined as bugs, defects, issues or a misunderstanding of what was expected.

The stuff to the right of the overlap is the stuff in the deliverable that you did not expect, and it is here that you need to use exploratory testing to discover what could be vital and important information for someone who matters. This is the stuff you do not know about, nor could you have known about, until you started to do some testing.

Outside the ovals is an area in which you did not have any expectations, nor is it in the deliverable, and as such it is not important with regards to testing for the purposes of this article.

The following diagram is a visual representation of what I wrote above.


With this as a model we can now start to think about what James Bach and Michael Bolton discussed with reference to the terms checking and testing. I see that for the stuff we know about (the expectations) we should look at covering this with machine checking (automation). This does not mean we should automate ALL expectations; we need to base our automation on many factors: cost, priority, risk, etc. (I plan a future article on not doing too much automation.) If we do this we should have some confidence that our deliverable meets our expectations. This then allows testers to start to do some 'testing' and uncover the useful information that exists in the deliverable that no one knows about until some testing is carried out.



This is why, IMO, the only way we can test is not by following scripts (even though they can be useful to aid some elements of meeting our expectations) but by exploring the deliverable and finding out what it is actually doing that we did not expect it to do.

My next article will extend this model and revisit measuring test coverage.




Tuesday, 25 June 2013

Tacit and Explicit Knowledge and Exploratory Testing

"We can know more than we can tell." - Michael Polanyi (1966)

I have finally read the excellent book Tacit and Explicit Knowledge by Harry Collins, which I personally found made a significant impact on my thinking about how we learn and record information (knowledge). It is not an easy book to read, and it took me a few re-readings of some sections to work out what the author may have meant.
I will start by saying that this article is based upon my own interpretation of the book, and the links I make between what the author writes and what I connect with regarding testing are entirely my own and could be flawed.
So what do we mean by tacit and explicit knowledge?
Harry Collins describes in great detail in his book what he means by these terms, but I could not find a clear definition that would be useful for this article. So I used some of the research references I had gathered while reading the book. One of the better ones I came across was the following website:
Explicit Knowledge: Knowledge that is codified and conveyed to others through dialog, demonstration, or media such as books, drawings, and documents.
Tacit Knowledge: Deeply personal experience, aptitudes, perceptions, insights, and know-how that are implied or indicated but not actually expressed — it resides in individuals & teams.
The site also has a great diagram showing visually the difference in the amount of knowledge we have in each type with explicit being quite small part of our total knowledge and tacit being the majority.


In her study of organisational knowledge, L. Wah stated the following:
According to Buckman of Buckman Labs, 90% of the knowledge in any organization is embedded and synthesized in peoples' heads. Unleashing this knowledge - which is tacit in nature - presents a big challenge. (Wah, 1999b)
We will revisit this later.
Another article I found which tried to define the terms was the following:
Explicit knowledge is formal and systematic. It can be easily communicated and shared. Typically, it has been documented. Articulated knowledge, expressed and recorded as words, numbers, codes, mathematical and scientific formulae, and musical notations. Explicit knowledge is easy to communicate, store, and distribute and is the knowledge found in books, on the web, and other visual and oral means.
Tacit knowledge on the other hand, is not so easily expressed. It is highly personal, hard to formalize and difficult to communicate to others. It may also be impossible to capture. The challenge is to identify which elements of tacit knowledge can be captured and made explicit—while accepting that some tacit knowledge just cannot be captured. For tacit knowledge that cannot be captured, the goal is to connect the possessors of tacit knowledge with the seekers of that knowledge
Another useful link that I came across was this one
Tacit Knowledge is highly personal and hard to formalize, making it difficult to communicate or share with others.
Explicit Knowledge is codified knowledge that can be transmitted in formal, systematic language.
With this in mind I started to look at how it applied to testing, and immediately I started to see a relationship between exploratory and scripted testing. My research found that the test eye people have talked about this here. I highly recommend reading it, since it goes much deeper than I do in this article into the kinds of tacit knowledge and how they relate to testing.
My thoughts then turned to how it relates to our everyday testing job, and I found that the information we know and can write down works well for scripted testing. This is what explicit knowledge implies: if we can document (codify) it, then we can script it. However, testing is also about exploring the information we do not know or cannot explain (the hidden stuff); to do this we have to use tacit knowledge (skills, experience, thinking) - we need to experience it to be able to work it out. This is what is meant by tacit knowledge: learn it by doing rather than by reading.
Harry Collins stated that all explicit knowledge comes from tacit knowledge, and I agree with this. Once we have done some exploratory testing and documented it, the tacit knowledge becomes explicit. However, this accounts for a very small portion of the whole wealth of our tacit knowledge, and as we do exploratory testing we uncover more areas that will require our tacit knowledge and skills to be used.
One interesting point is that tacit knowledge can become explicit as we communicate and socialise more. As our society evolves, and with it our ability to put our thinking and thoughts into words, more of our tacit knowledge becomes explicit. This appears to have happened over the lifetime of our existence on the planet. We started with primitive sketches and marks, which evolved into handwriting. We now appear to be going back to using sketching as a tool to make knowledge explicit. Are we returning to our historic past to help turn tacit into explicit? As history appears to show, as we become better at documenting and writing down our thoughts and thinking, it becomes easier to turn more of our tacit knowledge into explicit knowledge.
Ikujiro Nonaka and Hirotaka Takeuchi talk about how to turn our tacit knowledge into explicit knowledge in their book, The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. They explain the use of the 'knowledge conversion process' and the 'knowledge spiral'; details of these can be found in this article. Their explanation of the knowledge conversion process is reproduced below:
  1. Tacit-to-tacit (socialisation) - individuals acquire knowledge from others through dialogue and observation
  2. Tacit-to-explicit (externalisation) - the articulation of knowledge into tangible form through elicitation and documentation
  3. Explicit-to-explicit (combination) - combining different forms of explicit knowledge, such as that in documents or databases
  4. Explicit-to-tacit (internalisation) - such as learning by doing, where individuals internalise knowledge into their own mental models from documents.
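The four conversion modes above can be sketched as a small lookup keyed by the source and target knowledge types. This is only a toy model of my own, not anything from the book:

```python
# Toy model: Nonaka and Takeuchi's four knowledge-conversion modes,
# keyed by (source knowledge type, target knowledge type).
CONVERSION_MODES = {
    ("tacit", "tacit"): "socialisation",        # dialogue and observation
    ("tacit", "explicit"): "externalisation",   # elicitation and documentation
    ("explicit", "explicit"): "combination",    # merging documents, databases
    ("explicit", "tacit"): "internalisation",   # learning by doing
}

def mode(source, target):
    """Name the conversion mode for a given source/target pair."""
    return CONVERSION_MODES[(source, target)]

print(mode("tacit", "explicit"))  # externalisation
```

Exploratory testing, in these terms, spends most of its time in externalisation: turning what the tester experienced into something documented and shareable.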


If most of our knowledge is tacit (90%), then it would make sense to ensure that software development processes were aligned to where the majority of our knowledge lies. This would indicate that the majority of our testing effort should be spent using the exploratory testing approach to increase our explicit knowledge. Currently within the industry we seem to spend a lot of our testing time utilising our explicit knowledge, by means of requirements coverage, test automation checks and scripting, without realising that we are only using 10% of our total knowledge capacity. In any business, utilising 10% of any resource would be seen as commercial suicide, yet this practice is being carried out across the majority of the software testing profession and supported by many testing organisations in the way they 'certify' their testers. People need to be made more aware of the way in which human beings store and use their knowledge. It is not by documentation but by doing, immersing, taking part, practising and experimenting. It is time we started to recognise that testing is a tacit activity and requires testers to think both creatively and critically.
I feel that the way in which we use and share knowledge is going to become more important within the testing world, and many others within the testing community are talking about the subject of tacit and explicit knowledge. I have already mentioned the post by the test eye people; Michael Bolton posted an article about this subject recently, which in turn was based upon a future-prediction post Michael wrote in 2011. Simon Morley talked about using peer conferences to help turn tacit knowledge into explicit knowledge. Markus Gärtner has written a great post about how automation can help turn tacit into explicit and why exploratory testing is still vital. The EuroSTAR testing conference in Sweden this year has an underlying social science feel to it, especially with Harry Collins being one of the keynote speakers. Jeff Lucas posted an article about the relationship between tacit and explicit knowledge and testing on the Software Testing Club website. I feel it may be prudent for anyone involved in testing to learn more about tacit and explicit knowledge and how you can use your understanding of it to help you add value to your team, project and organisation.
This is an exciting time within software development, and one in which testers, with their ability and skill in turning tacit into explicit knowledge, already have a lead. The way we utilise this expertise and ability will determine the success of the project or organisation we are working within.

Monday, 24 June 2013

Testing Your Career Path

I have had an article published on the Ministry of Testing website.

http://www.ministryoftesting.com/2013/06/testing-your-career-path/

It is a look at the perception of testing and what options testers have with regards to progressing their career.

Monday, 10 June 2013

Test Ideas Cue Cards

How often when we are testing do we get stuck or cannot think of new and novel ways to exercise the system?

After my recent set of posts on creative and critical thinking, I started to think more and more about how we can improve our creative thinking. So I decided to put together a bunch of cards that may help improve your creative thinking, using a mixture of quotes, pictures, heuristics and testing cheat sheets. I cannot take full credit for the concept, since the idea is based upon some initial work by Karen Johnson turning testing mnemonics into a card deck; many thanks to @karennjohnson for the ideas and for allowing me to use them.

Some of the uses I have come up with for them include:

  • Test planning sessions
  • Test interviews
  • When using exploratory testing and you need a quick boost to aid creativity in your testing
  • In the test lab at a testing conference
  • Brainstorming sessions
I am sure you could come up with many more uses, and if you do please let Karen and me know.


The file containing the cards can be found as a PDF file using the following link: https://docs.google.com/file/d/0B8jc_cHKwbNocGdCM2JZdi03ZjQ/edit?usp=sharing

I have added a Word doc link for those who wish to edit the cards (thanks to Duncan for the suggestion).



Wednesday, 22 May 2013

Information Overload and Bad Decisions


This blog article is based upon my conference talk  at the Lets Test conference in Sweden.

Before we start to look at why too much information is bad, it might be worth defining what Information Overload actually means.

According to D. Allen and T. D. Wilson, 'Information overload: Context and causes', The New Review of Information Behaviour Research, Volume 4, Issue 1, 2003:
Information overload occurs when the information available exceeds the user's ability to process it.
Research has shown that as human beings we have a limited amount of capacity for storing information within our brains.
The human brain is quite remarkable.  It can store perhaps three terabytes of information.
http://www.sizes.com/people/brain.htm
And yet this is only about one millionth of the information that IBM say is now being produced in the world each day (and growing)
The Signal and the Noise - Nate Silver
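A quick back-of-the-envelope check makes the scale of that mismatch concrete. The figures here are the ones cited above (the ~3 terabyte brain estimate and IBM's oft-quoted ~2.5 exabytes produced per day), not measurements of my own:

```python
# Rough arithmetic on the cited figures: an estimated ~3 TB of brain
# storage versus ~2.5 EB of data produced globally each day.
TB = 10**12  # bytes in a terabyte (decimal)
EB = 10**18  # bytes in an exabyte (decimal)

brain_capacity = 3 * TB        # estimate cited above (assumption)
daily_production = 2.5 * EB    # IBM's 2013 figure (assumption)

ratio = brain_capacity / daily_production
print(ratio)  # 1.2e-06, i.e. roughly one millionth
```

In other words, even on generous assumptions, a single day of the world's data output dwarfs an individual's total storage by a factor of nearly a million, which is why overload is the default condition rather than the exception.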
The problem is that too much information can have a serious effect on human beings. The term 'Information Overload' appears to be credited to Alvin Toffler in his book 'Future Shock', in which he says the following about the problems of overloading:
It has been seen that Overloading leads to a serious breakdown of performance (sometimes with dangerous results). We are, in other words forcing them to process information at a far more rapid pace than was necessary in slowly-evolving societies. There can be little doubt that we are subjecting at least some of them to cognitive over stimulation.  What consequences this may have for mental health in the techno-societies has yet to be determined.
You may think that this is a 'new' problem, yet Alvin wrote the above passage in 1972!


Look at the image above: what do you see? A butterfly? A tree? Stars? It is a randomly created piece of art.

We are good at seeing patterns, and really good at making patterns from things where no patterns exist. We do the same with numbers and sequences of numbers (the 'lucky streak'). Within testing this can cause us problems, and unless we use critical thinking our own ancient survival skills will let us down.
 .. your brain hates ambiguity and is willing to take shortcuts to remove it from any situation.  If there is nothing else to go on, you will use what is available.  When pattern recognition fails, you create patterns of your own 
You Are Not So Smart - David McRaney
Statistics joke:
Did you hear about the statistician who drowned while crossing a river with an average depth of three feet?
We get misled and enticed within testing by the use of numbers, especially pass/fail metrics and many others, without thinking that numbers need a story. Once you have a story you can use numbers to help back it up. Telling the story of the numbers is vital; using only numbers to measure will ensure that you end up measuring the wrong thing!

I know Alan Page loves the Gorilla so I included it in my talk.

If you have already seen the Gorilla video, or know about it, I still recommend you watch the following video.

http://www.youtube.com/watch?v=IGQmdoK_ZfY

Did it still manage to fool you? Even when people have seen it, they still miss the unexpected. We find it difficult to do two things at once, and when we try we lose some valuable learning - multitasking is a myth. More information on this can be found in this interesting article: http://blogs.kqed.org/mindshift/2013/05/how-does-multitasking-change-the-way-kids-learn/

The next challenge I set was to ask the following question - now try to answer this quickly before moving on to the next paragraph:
How many animals did Moses take into the Ark?


Did you try to find an answer? Or did you question the question (think critically)? If you still see nothing wrong with the question, read it again.

There is a danger as testers that we can easily be primed and anchored into thinking that we need to give an answer or solve a problem. This leads into a discussion about the role of testers: to find problems or to solve them? I have slightly touched on this subject previously.

The other way in which we get anchored into thinking one way is by an over-reliance on requirement documentation. Instead of playing with and using the software, we try to second-guess and base our testing on a set of requirements that then controls and leads our testing effort. Rikard Edgren talks more about this in his article on the search for the potato. We should not rely only on requirement documents; we need to ask what more is there, and think about what the requirements may not be saying.

So how does all of the above relate to testing?

We base our testing decisions on our biases and 'quick' judgement - on what we think we believe - and we do not take the time to think. We create patterns from our tests where no patterns may exist; we are built to spot patterns, and sometimes this is useful, but other times it can mislead, waste time and lead to wrong choices being made. We follow, by design, the path of least resistance with our thinking - we are lazy. We make irrational quick choices that we are unaware we are making. Sometimes we do things without thinking; this is useful for driving, but not for testing, where we need to reflect, refocus and re-frame.

When we are faced with too much information we may not be able to make decisions, or we may even make the wrong decision; for example, soldiers freezing in a conflict situation, or panic on a plane that has crashed and is on fire. We can become overwhelmed and unable to make a choice, so we make no decision and await our fate.

Information: The very thing that makes it possible to be an engineer is threatening our ability to do our work. – IEEE Spectrum

We constantly fail to apply critical thinking to our testing; we are so overwhelmed that we forget to ask questions such as:
  • Why are you doing what you are doing?  
  • Could you be doing something better?  
  • Is this the most important thing you could be doing?

This leads on to the James Bach approach to critical thinking in testing:
  • Huh? – Do I really understand?
  • Really? How do I know what you say is true?
  • So? Is that the only solution?
You do not have to use this example; be creative and create your own - that is thinking!

“Rule of Three” – If you haven't thought of at least three plausible explanations, you’re not thinking critically enough - Gerald Weinberg - Quality Software Management Volume 2

So what can we do to help this overloading of information?
  • Slow down and think; we are going far too fast. We need time to pause and reflect. I have written about slowing down previously.
  • We need to remember that creative thinking is just as important and we sometimes need to take a step back
  • We need to learn that it is OK to make mistakes and get our assumptions wrong; that is the best way to learn. The important part is that we remember to learn from them.
  • If you are aware you are human and can easily be fooled then that can help you improve your thinking and question your own beliefs and motives for what you are doing.
  • Stop doing too much planning far, far ahead. We learn a lot more by doing, tinkering and playing; discover by accident, be creative. To remove your fallacies and assumptions you need to play with the system and see what it does.
  • Have a passion for testing and for your job; passion drives your knowledge and your thirst to learn more.
Nothing great has been and nothing great can be accomplished without passion
GWF Hegel


Lets Test Conference Talk 2013 - Resources


I recently had the good fortune to be a speaker at the Lets Test conference in Sweden. My topic for the conference talk was called Information Overload and Bad Decisions - more details here.

Anyone interested in viewing my presentation can use the following link for the SVG version:
https://docs.google.com/file/d/0B8jc_cHKwbNoSE80SnRLdDVzbXM/edit?usp=sharing

I also have a traditional PowerPoint version for those more used to this kind of presentation
https://docs.google.com/file/d/0B8jc_cHKwbNoNzNJVXl0SnBYcVU/edit?usp=sharing

My notes:
https://docs.google.com/file/d/0B8jc_cHKwbNoUktpSjBmcU1JZGc/edit?usp=sharing

For those interested in learning how I created the SVG presentation, you can find out more by clicking on the links below.

I used two tools - Inkscape (http://inkscape.org/) and Sozi - http://sozi.baierouge.fr/wiki/en:welcome

My next blog article will be a short summary of the talk.