Tuesday, 27 August 2013

The ‘Art’ of Software Testing

I was recently in Manchester, England, when I came across the Manchester Art Gallery, and since I had some time to spare I decided to visit and have a look around.  I have an appreciation for certain styles of art, especially artists such as William Blake and Constable.  During my visit I had a moment of epiphany.  Looking around the different collections, they appeared to be set out in a structured style, with similar styles collated together, apart from an odd example of the famous "Flower Thrower" artwork by Banksy being placed in the seventeenth-century collection area.  I wondered if this was a deliberate action to provoke debate.

What struck me when looking around was that even though many similar painting techniques and methods had been applied, there was no standard size for any of the paintings on display.  I looked around and could not find two paintings that appeared to have the same dimensions.  I even started to wonder whether a common ratio was being used, such as the so-called golden ratio. Looking around quickly, I could see some aspects of ratios in use, but to my eyes it appeared that even though the artists used similar approaches and techniques, they were 'free' to use these methods as a guide to producing their masterpieces.

This made me think about the debates in the field of software testing and how we should be taking on board engineering processes and practices.  If this is the case and we try to rein in the imagination, how are we supposed to hit upon moments of serendipity and be creative? I agree there needs to be structure and some discipline in the software testing world; session-based testing takes account of this.

We have common methods and techniques that we can use in software testing, but how we apply them should surely be driven by the context of the project.  I believe we need to stop or resist processes and best practices that hinder or suppress innovation and prevent those moments of enlightenment in which deeply hidden 'bugs' are discovered. Following pre-planned and pre-written test steps by rote can be useful, but if we all follow the same path, how many wonderful things are we going to miss?  I liken it to following a map or a GPS system and never looking up at the amazing landscape around you.  In our field of software testing we must allow testers to explore, discover and sometimes find by accident.  We need to stop forcing processes, best practices and standards upon something which is uniquely human: the skill of innovation and discovery.

The title of this article is an homage to one of the first books I read about software testing: The Art of Software Testing by Glenford Myers.


Tuesday, 20 August 2013

Tis the Season for Conferencing

It will soon be the start of the main software testing conference season, and there are many people who will not be able to attend for lots of reasons.  So if you cannot attend in person, why not use social media to follow what is happening at the conference?  The best way I have found to do this is to use Twitter and the hashtag designated for the conference.  I personally use Twitter a great deal when attending conferences, even going so far as to automate my tweets when presenting at Let's Test.  So I have gathered together as many hashtags as I could find for upcoming testing conferences so you can be virtually there.

If there are any I have missed, please add them as comments and I will add them to this article, giving credit of course.

For those who are attending a testing conference, can I ask that you use Twitter to keep others informed about the conference and some of the key points being made?  It helps the whole of the testing community.

For those organising a testing conference, please make sure you have a hashtag for your conference and make it widely known.  Some conference organisers do not have this in place, and it is a shame, since it is a way of drawing attention to your conference and of sharing knowledge beyond the confines of a physical location. It is also good to keep the hashtag the same each year instead of adding the year on; that way it can be used all year round and keeps conversations going.

The following is a list of hashtags for upcoming testing conferences for which I could locate one.

  • #CAST2013 - CAST - 26-28th August 2013 - Madison, WI, USA
  • #STARWEST - STARWEST - 29 Sept - 4 Oct 2013 - Anaheim, California, USA
  • #testbash - TestBash - 28th March 2014 - Brighton, UK
  • #letstest - Let's Test OZ - 15-17th September 2014 - Gold Coast, Australia

There are also testing events being organised that do not require you to pay.


If there is not one near you, why not organise one?

For those attending a testing conference, I would recommend reading the excellent guide to attending a conference written by Rob Lambert (@Rob_Lambert).

Friday, 2 August 2013

Stop Doing Too Much Automation

When researching my article on testing careers for the Testing Planet, a thought struck me about the number of respondents who indicated that 'test' automation was one of their main learning goals.  This made me think about how our craft appears to be going down a path where automation is seen as the magic bullet that can resolve all the issues we have in testing.
I have had the idea for this article floating around in my head for a while now, and the final push came when I saw the article by Alan Page (Tooth of the Angry Weasel), Last Word on the A Word, in which he said much of what I was thinking. So how can I expand on what I feel is a great article by Alan?
The part of the article that I found the most interesting was the following:

“..In fact, one of the touted benefits of automation is repeatability – but no user executes the same tasks over and over the exact same way, so writing a bunch of automated tasks to do the same is often silly.”

This is similar to what I want to write about in this article.  I see, time and time again, dashboards and metrics being shared around stating that by running this automated 'test' 1 million times we have saved a tester running it manually 1 million times; therefore, if the 'test' took 1 hour and 1 minute to run manually and 1 minute to run automated, we have saved 1 million hours of testing.  This is very tempting to a business that speaks in terms of value, which in this context means costs.  Saving 1 million hours of testing by automating is a significant cost saving, and this is the kind of thing the business likes to see: a tangible measure that shows ROI (Return on Investment) for doing 'test' automation.  Worryingly, this is how some companies sell their 'test' automation tools.

If we step back for a minute, let us go back and read the statement by Alan.  The thing that most people who state we should automate all testing talk about is the repeatability factor.  Now let us really think about this.  When you run a test script manually, you do more than what is written down in the script.  You think both critically and creatively; you observe things far from the beaten track of where the script was telling you to go.  Computers see in assertions: true or false, black or white, 0 or 1.  They cannot see what they are not told to see.  Even with the advances in artificial intelligence, it is very difficult for automation systems to 'check' more than they have been told to check.  To really test, and test well, you need a human being with the ability to think and observe.  Going back to our million-times example: if we ran the same test a million times on a piece of code that has not changed, the chances of finding NEW issues or problems remain very slim; however, running it manually with a different person each time, our chances of finding issues or problems increase.  I am aware our costs also increase and there is a point of diminishing returns.  James Lyndsay has talked about this on his blog, in which he discusses the importance of diversity.  The article also has a very clever visual aid to demonstrate why diversity is important, and as a side effect it helps to highlight the points of diminishing returns. This is the area the business needs to focus on, rather than how many times you have run a test.
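The diminishing-returns point can be illustrated with a toy simulation. This is only a sketch: the number of hidden bugs, the number of behaviours each run covers and the model itself are figures I have invented for illustration, not anything taken from James Lyndsay's article.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Toy model: the system hides 20 bugs.  A scripted check walks the same
# path every time, so it can only ever trip over the same few bugs; a
# human tester wanders a little, so each run samples different behaviours.
HIDDEN_BUGS = set(range(20))

def scripted_run():
    """The same check every time: it can only ever expose bugs 0 and 1."""
    return {0, 1}

def exploratory_run():
    """Each run looks at a random handful of behaviours."""
    return set(random.sample(sorted(HIDDEN_BUGS), 3))

def bugs_found(run, repetitions):
    """Count distinct bugs exposed over a number of repetitions."""
    found = set()
    for _ in range(repetitions):
        found |= run()
    return len(found)

print(bugs_found(scripted_run, 1_000_000))  # stays at 2 no matter how often it runs
print(bugs_found(exploratory_run, 1000))    # climbs towards 20, each extra run adding less
```

In this toy model the scripted check never finds a third bug however many million times it repeats, while the varied runs keep finding new ones, with each additional run worth less than the last, which is exactly the diminishing-returns curve the business should be looking at.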

My other concern is the use of metrics in automation to indicate how many of your tests you have automated or could automate.  How many of you have been asked this question?  The problem I see with this is what people mean by "how many of your tests".  What is this question based upon?  Is it...
  • all the tests that you know about now?
  • all possible tests you could run?
  • all tests you plan to run?
  • all your priority one tests?
The issue is that this is a number that will constantly change as you explore and test the system and learn more. Therefore, if you start reporting it as a metric, especially as a percentage, it soon becomes a non-valuable measure which costs more to collect and collate than any benefit it may try to imply.  I like to use the following example as an extreme view.

Manager:  Can you provide me a % of all the possible tests that you could run for system X that you could automate?
Me:  Are you sure you mean all possible tests?
Manager: Yes
Me: Ok, easy it is 0%
Manager:  ?????

Most people are aware that testing can have an infinite number of tests, even for the most simple of systems, so any number divided by infinity will be close to zero; hence the answer provided in the example scenario above.  Others could argue that we only care about how much of what we have planned to do can be automated, or only the high-priority stuff, and that is OK, to a point, but be careful about measuring this percentage, since it can and will vary up or down, and this can cause confusion.  As we test we find new stuff, and as we find new stuff our number of things to test increases.

My final worry with 'test' automation is the amount of 'test' automation we are doing (hence the title of this article).  I have seen cases where people automate for the sake of automation, since that is what they have been told to do.  This links in with the previous statement about measuring tests that can be automated.  There needs to be some intelligence when deciding what to automate and, more importantly, what not to automate. The problem is that when we are measured by the number of 'tests' we can automate, human nature will make us act in a way that looks good against what we are being measured on. There are major problems with this: people stop thinking about what would be the best automation solution and concentrate on trying to automate as much as they can, regardless of cost.

What!  You did not realise that automation has a cost?  One of the common problems I see when people sell 'test' automation is that they conveniently (or otherwise) forget to include the hidden costs of automation.  We always see figures for the amount of testing time (and money) saved by running this set of 'tests' each time.  What does not get reported, and is very rarely measured, is the amount of time spent maintaining and analysing the results from 'test' automation.  This is important since it is time a tester could spend doing some actual testing and finding new information, rather than confirming our existing expectations.  This appears to be missing whenever I hear people talking of 'test' automation in a positive way.  What I see is a race to automate all that can be automated, regardless of the cost to maintain.
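To make the hidden-cost argument concrete, here is a back-of-the-envelope calculation. Every figure in it (build effort, maintenance and analysis time, the number of manual runs you would realistically have done) is hypothetical and made up for illustration; the point is only that the headline "hours saved" figure looks very different once those costs, and a realistic run count, are included.

```python
# The headline claim from the article's example: a check that takes
# 61 minutes manually and 1 minute automated, "run a million times".
runs = 1_000_000
manual_minutes = 61
automated_minutes = 1

naive_saving_hours = runs * (manual_minutes - automated_minutes) / 60
print(f"claimed saving: {naive_saving_hours:,.0f} hours")

# The costs that rarely reach the dashboard (all hypothetical figures):
build_hours = 400                 # writing the automation in the first place
maintenance_hours_per_month = 40  # keeping it green as the product changes
analysis_hours_per_month = 20     # triaging failures and flaky results
months = 24

hidden_cost_hours = build_hours + months * (
    maintenance_hours_per_month + analysis_hours_per_month)
print(f"hidden cost: {hidden_cost_hours:,.0f} hours")

# No tester would ever have run the check a million times by hand; the
# honest baseline is the runs you would actually have done manually.
realistic_manual_runs = 200       # say, a manual pass per release
realistic_saving_hours = (
    realistic_manual_runs * manual_minutes / 60 - hidden_cost_hours)
print(f"saving against runs you would really do: {realistic_saving_hours:,.0f} hours")
```

With these invented numbers the "million hours saved" shrinks to a net loss, which is exactly why the maintenance and analysis time has to appear in any ROI claim.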

If you are looking at implementing test automation, you seriously need to think about what the purpose of the automation is.  I would suggest you do 'just enough' automation to give you confidence that the system appears to work in the way your customer expects.  This level of automation then frees up your testers to do some actual testing, or to create automation tools that can aid testing.  You need to stop doing too much automation and look at ways you can make your 'test' automation effective and efficient without it being a bloated, cumbersome, hard-to-maintain monstrosity (does that describe some people's current automation system?).  Also, automation is mainly code, so it should be treated the same as code and be regularly reviewed and refactored to reduce duplication and waste.

I am not against automation at all, and in my daily job I encourage and support people to use automation to help them do excellent testing. I feel it plays a vital role as a tool to SUPPORT testing; it should NOT be sold on the premise that it can replace testing or thinking testers.

Some observant readers may wonder why I write 'test' in this way when mentioning 'test' automation.  My reasons for this can be found in James Bach's article on testing vs. checking refined.