Wednesday, 16 October 2013

Blog Action Day - Human Rights

Today is Blog Action Day and the theme for this year is Human Rights.

More details of this great cause can be found here: http://blogactionday.org/#

If you want to support this event and have a Twitter account, use the hashtags #BAD2013 and #humanrights and the Twitter handle @blogactionday.  Please tweet to raise awareness of this cause.

I came across the following poster produced by Zen Pencils and thought it was a fantastic illustration of the meaning of human rights.  I hope he does not mind, but I have used the image on my blog here since it far surpasses anything I could ever have come up with.  Please visit his site and support him as an artist.

(C) http://zenpencils.com/comic/134-the-universal-declaration-of-human-rights/
Please click on the link to see a high resolution version

Further reading on human rights and how you can be involved
Human Rights Day 10th December 2013


Tuesday, 15 October 2013

Are you ‘Checking’ or ‘Testing’ (Exploratory) Today?

Do you ask yourself this question before you carry out any test execution?

If not, then this article is for you.  It starts with a brief introduction to the meaning of checking and testing in the context of exploration, then asks the reader to consider the question, evaluate the testing they are doing, and determine from their answer whether what they are doing has the most value.

There have been many discussions within the testing community about what the difference is between ‘checking’ and ‘testing’ and how it fits within the practice of test execution.

Michael Bolton started the debate with his article from 2009 on ‘testing vs checking’ in which he defined them as such:

  • Checking Is Confirmation
  • Testing Is Exploration and Learning
  • Checks Are Machine-Decidable; Tests Require Sapience

At that time there were some fairly strong debates on this subject.  In the main I tended to agree with the distinctions Michael made between ‘checking’ and ‘testing’ and used them in my approach when testing.

James Bach, working with Michael, then came along with another article to refine the definitions: ‘Testing and Checking Refined’.

  • Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modelling, observation and inference.
  • (A test is an instance of testing.)
  • Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
  • (A check is an instance of checking.)

From this they stated that there are three types of checking:

  • Human checking is an attempted checking process wherein humans collect the observations and apply the rules without the mediation of tools.
  • Machine checking is a checking process wherein tools collect the observations and apply the rules without the mediation of humans.
  • Human/machine checking is an attempted checking process wherein both humans and tools interact to collect the observations and apply the rules.

The conclusion, to me, appeared to be that checking is a part of testing, but we need to work out which would be best to use for the checking part: a machine or a human?  This question leads to the reason for putting this article together.

James on his website produced a picture to aid visualisation of the concept:


© - James Bach - Testing and Checking refined

Since, in my world, checking forms a part of testing (as I have interpreted James's article), we need to pause for a moment and think about what we are really doing before performing any testing.

We need to ask ourselves this question:

Are we ‘checking’ or ‘testing’ (exploratory) today?

Since both form a part of testing, and both could, depending on the context, be of value and importance to the project, it is vital that we understand what type of testing we are doing.  If we ask ourselves that question and our answer is ‘checking’, we then need to find out what type of checking we are doing.  If the checking being performed falls under the category of machine checking, we need to ask why we, as thinking humans, are doing it rather than getting a machine to perform it.  Validation or verification of requirements or functions for which we already know what the answer will be could fall under this category.  If this is the case then you need to ask

WHY:

  • are you doing this?
  • is a machine not doing this?
  • can a machine not do this for me?

The problem with a person manually carrying out machine checking is that it uses valuable testing time that could be spent discovering or uncovering information that we do not know or expect.  When people working in software development talk about test automation, this is what I feel they are talking about.  As rightly stated by James and Michael, there are many other automation tools that testers can use to aid their exploratory testing or human checking, which could also be classified as test automation.
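To make the distinction concrete, here is a minimal sketch of a machine check in Python.  The function under test (add_vat) and its expected value are my own hypothetical examples, not anything from the articles above; the point is that the decision rule is algorithmic, so a machine can apply it, and it will never notice anything the rule does not mention.

```python
# A minimal machine check: an algorithmic decision rule applied to a
# specific observation of the product. add_vat is a hypothetical
# function under test; 120.00 is the single expected value.

def add_vat(net_price: float, rate: float = 0.20) -> float:
    """Hypothetical product code: return the gross price including VAT."""
    return round(net_price * (1 + rate), 2)

def check_add_vat() -> bool:
    observed = add_vat(100.00)   # collect the observation
    return observed == 120.00    # apply the decision rule (machine-decidable)

assert check_add_vat()
```

A human stepping through this by hand would be doing exactly what the machine can do for free, which is the point of the WHY questions above.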

So even though checking is a part of testing and can be useful, it may not be the best use of your time as a tester.  There are various reasons why testers end up carrying out machine checking by hand rather than automating it, such as:

  • Too expensive
  • Too difficult to automate
  • No time
  • Lack of skills
  • Lack of people
  • Lack of expertise

However if all you are doing during test execution is machine checking then what useful information are you missing out on finding? 

If we go back to the title of this article are you ‘checking’ or ‘testing’ today?

You need to make sure you ask yourself this question each time you test, and evaluate which you feel would have the most value to the project at that time.  We cannot continue with the same ‘must run every test check manually’ mentality, since this only addresses the stuff we know and takes no account of the risk or priority of the information we have yet to discover.

The information that may be important or useful is hidden in the things we do not expect or currently do not know about.  To find it we must explore the system and look for the interesting details that this type of (exploratory) testing yields.

I will leave you with the following for when you are next carrying out machine checking:

…without the ability to use context and expectations to “go beyond the information given,” we would be unintelligent in the same way that computers with superior computational capacity are unintelligent.
J. S. Bruner (1957) Going beyond the information given.


Further reading






Friday, 11 October 2013

Believing in the Requirements

Traditionally in testing there has been a large amount of emphasis placed upon ‘testing’ or ‘checking’ the requirements.  An article by Paul Holland on functional specification blinders, and my current reading of Thomas Gilovich's excellent book How We Know What Isn't So, have made me re-think this strategy from a psychological perspective.  I feel Paul was on the right track with his suggestion of not using the requirements/specification to guide your creative test idea generation but looking at alternatives.  However, even these alternatives could limit your thinking and creative ideas because of the way we think.
The problem we have is that once we have been presented with any information, our inbuilt beliefs start to play their part and we look at the information with a biased slant.  We are built to look for confirmations that match our beliefs; in other words, we look for things we want to believe in.  So if we believe the implementation is poor, or that the system under test has been badly designed, we will look for things that confirm this and provide evidence that what we believe is true.  We get a ‘buzz’ when we get a ‘yes’ that matches our beliefs.  The same can apply when looking through the requirements: we start to find things that match our beliefs, and at the same time the requirements (especially if ambiguous) start to influence our beliefs so that we, as Paul discovered, only look for confirmations of what is being said.  Once we have enough information to satisfy our beliefs we stop and feel that we have done enough.
The other side of this is that any information that goes against our beliefs makes us dig deeper and look for ways to discount the evidence and find flaws in it.  The issue is that if we are looking at requirements or a specification, there is normally not much that goes against our initial beliefs, due to the historic influence these documents can have.  So we normally do not get to the stage of digging deeper into their meaning.
As Thomas Gilovich stated
People’s preferences influence not only the kind of information they consider, but also the amount they examine.
If we find enough evidence to support our views, then normally we are satisfied and stop.  This limits our scope for testing and being creative.  My thought on how to get around this, apart from following the advice Paul gives, is to be self-critical and question oneself.
When we are in a belief-confirming mode, we internally ask ourselves the following question:
 “Can I believe this?”
Alternatively, when we find information that does not match or confirm our beliefs, we internally ask ourselves the following question:
“Must I believe this?”
These questions are taken from the book by Thomas Gilovich referenced earlier, in which he states:
The evidence required for affirmative answers to these two questions are enormously different.
Gilovich mentions that this is a type of internal framing we do at a psychological level.  Reading this reminded me to go back and re-read the article by Michael Bolton on test framing, on which I attended a tutorial at the EuroSTAR testing conference.  I noted within Michael's article that there appeared, IMO, to be a lot of proving the person's beliefs rather than disproving them.  In other words, many of the examples were answering the “Can I believe this?” question.  This is not wrong; it is a vital part of testing, and I use the methods Michael describes a great deal in my day-to-day work.  I wonder if this topic could be expanded a little by looking at the opposite and trying to disprove your beliefs, in other words asking the “Must I believe this?” question.
So moving forward, I believe we can use our biases to our advantage and become more creative in our test ideas.  To do this we need to look for ways to go against what we believe is right and think more negatively.  The next time you look at a requirements or specification document, ask yourself the following:
“MUST I BELIEVE THIS”
And see where this leads you.

PS – this article is a double-edged sword – if you have read it, you should now be asking “Must I believe this?”

Tuesday, 17 September 2013

The Size of the Automation Box

I have previously written an article about doing too much automation and have recently been involved in successfully implementing automation and exploratory testing in agile environments.  During discussions about how to implement the automation framework a work colleague made the following statement in regards to defining the automation boundaries:
“Defining the size of the black box is critical”.  
It made me think about the importance of knowing where your automation boundaries are and how critical this is for the successful deployment of an automation framework.  One point to keep in mind when you try to implement an automation framework is the principle of K.I.S.S. (‘Keep it simple, stupid’).
I have seen (and been involved in) several failed automation implementations and in retrospect the main reason for failure has been trying to automate everything that you can possibly think to automate. I discussed this problem in the article too much automation and the importance of thinking critically about what to automate. Time and time again people go rushing ahead to try and automate everything they can without thinking about the boundaries of what their automation framework should be covering. 
So what is the solution? The rest of this article offers some guidelines that may help you in your quest to implement a successful automation framework.
The first thing to look at when trying to implement automation is the context of the testing you are carrying out or attempting to automate.  Are you involved in sub-system testing, component testing, end-to-end testing, customer-driven testing or some other type of testing?  It is important to look at this and work out what you want the automation to do for you.
  • At Customer level you may want to automate the common customer scenarios.
  • At a system level you may want to validate the input and outputs to a specific set of components and stub out the E2E system.
So defining the context and purpose of your testing from an automation viewpoint is important.  The reason for this is that it links to the second point: defining the size, or scope, of your automation for the project.  Most automation implementations fail to take into account the limits of what you intend to cover with the automation.  As my colleague put it, “Defining the size of the black box is critical”.
The size of this black box will change depending on the context of the type of testing you are carrying out.  For your automation implementation to be successful, it is important to define upfront the boundaries of what the automation is expected to cover.  Automation should be about what you input into the system and what you expect as an output; everything in between is your black box.  Knowing your testing context should help define the size and limits of your black box, and knowing this information before you start to implement a test automation framework should greatly improve its chances of being successful.
Example of defining black box at a Component level of testing
Shaded Gray Area = Black box
For this example the boundaries would be
  • Create automated test inputs for Input A in relation to defined requirements and expectations
  • Automate expected outputs Output 1 and Output 2 based upon requirements and your expectations
  • What happens inside of Component X is of no interest since this is outside the boundaries of what we are automating
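As a sketch of what this boundary definition might look like in code, the following treats Component X as an opaque box: the automation only drives Input A and asserts on Output 1 and Output 2.  The component_x function here is a hypothetical stand-in for the real component, not anything from the post.

```python
# Black-box automation at component level: only the boundary is observed.
# component_x is a hypothetical stand-in; its internals are out of scope.

def component_x(input_a: int) -> tuple[int, int]:
    """Stand-in for the real component: returns (output_1, output_2)."""
    return input_a * 2, input_a + 10

def check_component_x(input_a: int, expected_1: int, expected_2: int) -> bool:
    out_1, out_2 = component_x(input_a)            # observe only the outputs
    return (out_1, out_2) == (expected_1, expected_2)

assert check_component_x(5, 10, 15)
```

Note that the check says nothing about how component_x works internally; if the boundary were redrawn (for example, to product level), only the inputs and expected outputs would change, not the structure of the check.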

Another example of defining black box at a Component level of testing
  • Create automated test inputs for Input B in relation to defined requirements and expectations
  • Automate expected outputs Output 1, Output 2, Output 3 and Output 4 based upon requirements and your expectations
  • What happens inside of Component Y is of no interest since this is outside the boundaries of what we are automating
Example of defining black box at a Product level of testing
  • Create automated test inputs for Input A and Input B in relation to defined requirements and expectations
  • Automate expected outputs Output 1, Output 2, Output 3 and Output 4 based upon requirements and your expectations
  • What happens inside of Components X and Y is of no interest since this is outside the boundaries of what we are automating
I am aware that this appears to be common sense, but time and time again people fail to take on board the context of what they expect the automation to cover.
So next time you are involved in implementing an automation framework, remember to define the size of your black box before you start; that way you know the boundaries of what your automation will cover and check for you.

Tuesday, 27 August 2013

The ‘Art’ of Software Testing

I was recently in Manchester, England, when I came across the Manchester Art Gallery, and since I had some spare time I decided to visit and have a look around.  I have an appreciation for certain styles of art, especially artists such as William Blake and Constable.  During my visit I had a moment of epiphany.  The different collections appeared to be set out in a structured style, with similar styles collated together – apart from an odd example of the famous “Flower Thrower” artwork by Banksy being placed in the seventeenth-century art collection area.  I wondered if this was a deliberate action to cause debate.

What struck me when looking around was that even though many similar painting techniques and methods had been applied, there was no standard size for any of the paintings on display.  I looked around and could not find two paintings that appeared to have the same dimensions.  I even started to wonder if there was a common ratio in use, such as the so-called golden ratio.  Looking quickly, I could see some aspects of ratios being used, but to my eyes it appeared that even though the artists used similar approaches and techniques, they were ‘free’ to use these methods as a guide to producing their masterpieces.

This made me think about the debates in the field of software testing and whether we should be taking on board engineering processes and practices.  If we do, and we try to rein in the imagination, how are we supposed to hit upon moments of serendipity and be creative? I agree there needs to be structure and some discipline in the software testing world; session-based testing takes account of this.

We have common methods and techniques that we can use in software testing, but how we apply them should surely be driven by the context of the project.  I believe we need to stop, or resist, processes and best practices that hinder or suppress innovation and prevent those moments of enlightenment in which deeply hidden ‘bugs’ are discovered.  Following pre-planned, pre-written test steps by rote can be useful, but if we all follow the same path, how many wonderful things are we going to miss? I liken it to following a map or a GPS system and not looking around you at the amazing landscape you are failing to notice or appreciate.  In our field of software testing we must allow testers to explore, discover and sometimes find by accident.  We need to stop forcing processes, best practices and standards upon something which is uniquely human: the skill of innovation and discovery.

The title of this article is an homage to one of the first books I read about software testing: Glenford Myers – The Art of Software Testing.


Tuesday, 20 August 2013

Tis the Season for Conferencing

It will soon be the start of the main software testing conference season, and there are many people who will not be able to attend for lots of reasons.  If you cannot attend in person, why not use social media to follow what is happening at the conference? The best way I have found to do this is to use Twitter and the hashtag designated for the conference.  I personally use Twitter a great deal when attending conferences, even going so far as to automate my tweets when presenting at Let's Test.  So I have gathered together as many hashtags as I could find for upcoming testing conferences so you can be virtually there.

If there are any I have missed please add them as comments and I will add them to this article giving credit of course.

For those who are attending a testing conference, can I ask that you use Twitter to keep others informed about the conference and some of the key points being made? It helps the whole of the testing community.

For those organising a testing conference, please make sure you have a hashtag for your conference and make it widely known.  Some conference organisers do not have this in place, which is a shame since it is a way of drawing attention to your conference and sharing knowledge beyond the confines of a physical location.  It is also good to keep the hashtag the same each year instead of adding the year on; that way it can be used all year round and keeps conversations going.

The following is a list of hashtags for upcoming testing conferences for which I could locate one.

  • #CAST2013 – CAST – 26–28th August 2013 – Madison, WI, USA
  • #starwest – STARWEST – 29 Sept–4 Oct 2013 – Anaheim, California, USA
  • #testbash – TestBash – 28th March 2014 – Brighton, UK
  • #letstest – Let's Test OZ – 15–17th September 2014 – Gold Coast, Australia

Also there are testing events being organised that do not require you to pay.


If there is not one near you, why not organise one?

For those attending a testing conference, I would recommend reading the excellent guide to attending a conference written by Rob Lambert (@Rob_Lambert).

Friday, 2 August 2013

Stop Doing Too Much Automation

When researching my article on testing careers for the Testing Planet, a thought struck me about the number of respondents who indicated that ‘test’ automation was one of their main learning goals.  This made me think about how our craft appears to be going down a path where automation is the magic bullet that can resolve all the issues we have in testing.
I have had the idea for this article floating around in my head for a while now, and the final push came when I saw the article by Alan Page (Tooth of the Angry Weasel) – Last Word on the A Word – in which he said much of what I was thinking.  So how can I expand on what I feel is a great article by Alan?
The part of the article that I found the most interesting was the following:

“..In fact, one of the touted benefits of automation is repeatability – but no user executes the same tasks over and over the exact same way, so writing a bunch of automated tasks to do the same is often silly.”

This is similar to what I want to write about in this article.  I see time and time again dashboards and metrics being shared around stating that by running this automated ‘test’ 1 million times we have saved a tester running it manually 1 million times; and therefore, if the ‘test’ took 1 hour and 1 minute to run manually and 1 minute to run automated, we have saved 1 million hours of testing.  This is very tempting to a business that speaks in terms of value, which in this context means cost.  Saving 1 million hours of testing by automating is a significant cost saving, and this is the kind of thing the business likes to see: a tangible measure that shows ROI (return on investment) for doing ‘test’ automation.  Worryingly, this is how some companies sell their ‘test’ automation tools.
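The arithmetic behind these dashboards is easy to reproduce, which is part of its appeal.  The sketch below uses the figures from the paragraph above, plus entirely hypothetical maintenance and analysis numbers of my own, to show the kind of cost the headline figure leaves out.

```python
# The naive ROI claim: minutes saved per run, multiplied by run count.
manual_minutes = 61       # 1 hour 1 minute per manual run
automated_minutes = 1     # per automated run
runs = 1_000_000

naive_saving_hours = runs * (manual_minutes - automated_minutes) / 60
print(f"Headline saving: {naive_saving_hours:,.0f} hours")  # 1,000,000 hours

# What the dashboard rarely shows (all figures hypothetical):
maintenance_hours_per_month = 80   # keeping checks green after code changes
analysis_hours_per_month = 40      # triaging failures and flaky runs
months = 24

hidden_cost_hours = (maintenance_hours_per_month + analysis_hours_per_month) * months
print(f"Hidden cost over {months} months: {hidden_cost_hours:,} hours")
```

The headline number is real, but it is only half of the ledger; the maintenance and analysis side is discussed further below.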

If we step back for a minute and re-read Alan's statement: the thing most people who say we should automate all testing talk about is the repeatability factor.  Now let us really think about this.  When you run a test script manually, you do more than what is written down in the script.  You think both critically and creatively; you observe things far from the beaten track of where the script was telling you to go.  Computers see in assertions: true or false, black or white, 0 or 1.  They cannot see what they are not told to see.  Even with the advances in artificial intelligence, it is very difficult for automation systems to ‘check’ more than they have been told to.  To really test, and test well, you need a human being with the ability to think and observe.  Going back to our million-times example: if we ran the same test a million times on a piece of code that has not changed, the chances of finding NEW issues or problems remain very slim; however, running it manually with a different person each time, our chances of finding issues or problems increase.  I am aware our costs also increase and there is a point of diminishing returns.  James Lyndsay has talked about this on his blog, in which he discusses the importance of diversity.  The article also has a very clever visual aid to demonstrate why diversity is important, and as a side effect it helps to highlight the point of diminishing returns.  This is the area the business needs to focus on rather than how many times you have run a test.

My other concern is the use of metrics in automation to indicate how many of your tests you have automated or could automate.  How many of you have been asked this question?  The problem I see with this is what people mean by “how many of your tests?”  What is this question based upon?  Is it...
  • all the tests that you know about now?
  • all possible tests you could run?
  • all tests you plan to run?
  • all your priority one tests?
The issue is that this is a number that will constantly change as you explore and test the system and learn more.  Therefore, if you start reporting it as a metric, especially as a percentage, it soon becomes a non-valuable measure that costs more to collect and collate than any benefit it may imply.  I like to use the following example as an extreme view.

Manager:  Can you provide me a % of all the possible tests that you could run for system X that you could automate.
Me:  Are you sure you mean all possible tests?
Manager: Yes
Me: Ok, easy it is 0%
Manager:  ?????

Most people are aware that testing can have an infinite number of tests even for the simplest of systems, and any number divided by infinity is close to zero; hence the answer provided in the example above.  Others could argue that we only care about what we have planned to do and how much of that can be automated, or only the high-priority stuff, and that is OK, to a point.  But be careful about measuring this percentage, since it can and will vary up or down, and this can cause confusion.  As we test we find new stuff, and as we find new stuff our number of things to test increases.
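To illustrate why the percentage is unstable, here is a small sketch with entirely hypothetical numbers: the automated count stays fixed while the denominator grows as testing reveals new tests.

```python
# "% of tests automated" with a growing denominator: the same 50
# automated checks report a steadily shrinking percentage.

automated = 50                                  # automated checks (fixed)
known_tests_by_sprint = [100, 160, 250, 400]    # known tests keep growing

percentages = [100 * automated / known for known in known_tests_by_sprint]
for sprint, pct in enumerate(percentages, start=1):
    print(f"Sprint {sprint}: {pct:.0f}% automated")
# Reports 50%, then 31%, 20% and 12% – with no change to the automation.
```

Reported as a trend, this looks like the automation effort is going backwards, when in fact nothing about it has changed; only our knowledge of the system has grown.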

My final worry with ‘test’ automation is the amount of it we are doing (hence the title of this article).  I have seen cases where people automate for the sake of automation because that is what they have been told to do.  This links in with the previous statement about measuring tests that can be automated.  There needs to be some intelligence when deciding what to automate and, more importantly, what not to automate.  The problem is that when we are measured by the number of ‘tests’ we can automate, human nature will start to act in a way that makes us look good against what we are measured on.  There are major problems with this: people stop thinking about what would be the best automation solution and concentrate on trying to automate as much as they can, regardless of cost.

What! You did not realise that automation has a cost?  One of the common problems I see when people sell ‘test’ automation is that they, conveniently or otherwise, forget to include the hidden costs of automation.  We always see the figures for the amount of testing time (and money) saved by running this set of ‘tests’ each time.  What does not get reported, and is very rarely measured, is the amount of time spent maintaining the automation and analysing the results from it.  This is important since this is time a tester could be spending doing some testing and finding new information, rather than confirming existing expectations.  This appears to be missing whenever I hear people talking about ‘test’ automation in a positive way.  What I see is a race to automate all that can be automated, regardless of the cost to maintain it.

If you are looking at implementing test automation, you seriously need to think about what the purpose of the automation is.  I would suggest you do ‘just enough’ automation to give you confidence that the product appears to work in the way your customer expects.  This level of automation then frees up your testers to do some actual testing, or to create automation tools that can aid testing.  You need to stop doing too much automation and look at ways to make your ‘test’ automation effective and efficient without it being a bloated, cumbersome, hard-to-maintain monstrosity (does that describe some people's current automation system?).  Also, automation is mainly code, so it should be treated the same as code and be regularly reviewed and refactored to reduce duplication and waste.

I am not against automation at all, and in my daily job I encourage and support people to use automation to help them do excellent testing.  I feel it plays a vital role as a tool to SUPPORT testing; it should NOT be sold on the premise that it can replace testing or thinking testers.

Some observant readers may wonder why I write ‘test’ in this way when mentioning ‘test’ automation.  My reasons for this can be found in the article by James Bach on testing vs. checking refined.