Thursday, 17 April 2014

Using Exploratory Testing to Drive Automation

Traditionally, automation and exploratory testing have been seen as independent activities that work in different ways and do not appear to be natural partners. This article suggests that there is value in using the work done while automating to drive some exploratory testing effort.

In most cases there is some form of documentation or communication that indicates what the system being designed will do. This can take the form of requirements, design specifications, emails, design meetings and so forth. An assumption has been made here that testers are involved in the design phase; if they are not on your project, they should be.

Within the context in which I work, these documents and communications are analysed to produce automated Cucumber scenarios and exploratory charters. The automation scenarios are written in the Gherkin format:

Feature: Password management
Scenario: Forgot password
Given a user with email "someuser@example.com" exists
When I ask for a password reset
Then an email with a password reset link should be sent


Exploratory charters are created in the format described by Elisabeth Hendrickson in her book Explore It!:

Explore (target)
With (resources)
To discover (information)


Where:
  • Target: where are you exploring?
  • Resources: what resources will you bring with you?
  • Information: what kind of information are you hoping to find?
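
For example, a charter for the password reset scenario above might read:

Explore the password reset flow
With a selection of malformed and unregistered email addresses
To discover how the system handles unexpected input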

Once the exploratory charters and Cucumber scenarios have been initially defined, the test engineer manually steps through each Cucumber scenario to determine the prerequisite components needed to complete it from an automation perspective, adding detail to the Gherkin feature file as they go. This may involve building the environment and integrating the components. It is at this point that some exploratory testing can be performed to gain more value from the manual effort. This manual effort can be recorded on a wiki or a similar tool; the suggestion of a wiki is because of its ease of access and use for cross-regional teams. Using a wiki is not mandated, but from experience it appears to be the best way to share the information.

This requires a mindset shift for the test engineer: instead of focusing only on the automation scenario, they should be looking for future opportunities to test, whether automated or manual, and these should be fed back to the team and added as backlog items. A further benefit is that as you explore the automation scenario you may uncover issues or defects earlier, which is always good for the project.

Once the step definitions have been coded and implemented, the automation scenario should feed into whatever automation reporting system you have selected, reporting each scenario as a pass or a fail. Defects are not expected to be found at this stage; if a scenario fails, it is an indication that your expectations of what the system does have changed, and this should be investigated before any defect is raised. This is a simple approach that closes the link between test automation and exploratory testing and leverages the skills of the test engineers to their full value.
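
To make this concrete, here is a minimal sketch of step definitions for the scenario above. It uses Python's behave library purely for illustration (Cucumber teams commonly use Ruby or Java instead), and the application client and mailbox helpers are hypothetical:

from behave import given, when, then

@given('a user with email "{email}" exists')
def step_user_exists(context, email):
    # In a real suite this would create or look up the user via the application.
    context.email = email

@when('I ask for a password reset')
def step_request_reset(context):
    # context.app is a hypothetical client for the system under test.
    context.response = context.app.request_password_reset(context.email)

@then('an email with a password reset link should be sent')
def step_reset_email_sent(context):
    # context.mailbox is a hypothetical test mailbox helper.
    message = context.mailbox.last_message_to(context.email)
    assert "reset" in message.body.lower()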

It should be noted that there will still need to be effort to ensure the exploratory charters are run manually and recorded using the current approaches for your projects. This then becomes a continuous cycle: as more scenarios are discovered for automation, more exploratory testing can be done and more information about the system can be uncovered. This approach makes exploratory testing and automation partners rather than independent entities. It hopefully reduces the checking vs testing debate and focuses on utilising the skills of testers to deliver a product that customers will enjoy using.

The diagram below is a simple visual representation of this concept.


Tuesday, 15 October 2013

Are you ‘Checking’ or ‘Testing’ (Exploratory) Today?

Do you ask yourself this question before you carry out any test execution?

If not, then this article is for you. It starts with a brief introduction to the meaning of checking and testing in the context of exploration, then asks you to think about the question, evaluate the testing you are doing, and determine from your answer whether what you are doing has the most value.

There have been many discussions within the testing community about the difference between ‘checking’ and ‘testing’ and how each fits within the practice of test execution.

Michael Bolton started the debate with his 2009 article on ‘testing vs checking’, in which he defined them as follows:

  • Checking Is Confirmation
  • Testing Is Exploration and Learning
  • Checks Are Machine-Decidable; Tests Require Sapience

At that time there were some fairly strong debates on the subject; in the main I agreed with the distinctions Michael made between ‘checking’ and ‘testing’ and used them in my approach to testing.

James Bach, working with Michael, later refined the definitions in the article ‘Testing and Checking Refined’:

  • Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modelling, observation and inference.
  • (A test is an instance of testing.)
  • Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
  • (A check is an instance of checking.)

From this they stated that there are three types of checking:

  • Human checking is an attempted checking process wherein humans collect the observations and apply the rules without the mediation of tools.
  • Machine checking is a checking process wherein tools collect the observations and apply the rules without the mediation of humans.
  • Human/machine checking is an attempted checking process wherein both humans and tools interact to collect the observations and apply the rules.

The conclusion, to me, appeared to be that checking is a part of testing, but we need to work out what is best placed to do the checking: a machine or a human? That question is the reason for putting this article together.

On his website, James produced a picture to aid visualisation of the concept:


© James Bach, ‘Testing and Checking Refined’

Since, as I interpret James’s article, checking forms a part of testing in my world, we need to pause for a moment and think about what we are really doing before performing any testing.

We need to ask ourselves this question:

Are we ‘checking’ or ‘testing’ (exploratory) today?

Since both form a part of testing, and both could, depending on the context, be of value and importance to the project, it is vital that we understand what type of testing we are doing. If we ask ourselves that question and the answer is ‘checking’, we then need to find out what type of checking we are doing. If it falls under the category of machine checking, we need to ask why we, as thinking humans, are doing it rather than getting a machine to perform it. Validation or verification of requirements or functions for which we already know the answer could fall under this category. If this is the case, then you need to ask

WHY:

  • are you doing this?
  • is a machine not doing this?
  • can a machine not do this for me?

The problem with a person manually carrying out machine checking is that it uses valuable testing time that could be spent discovering information that we do not know or expect. When people working in software development talk about test automation, this is what I feel they are talking about. As James and Michael rightly state, there are many other automation tools that testers can use to aid their exploratory testing or human checking, and these could also be classified as test automation.
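
To make the distinction concrete, here is a minimal sketch of a machine check in Python (the validation rule and data are invented for illustration). The rule is algorithmic and the verdict is binary, so nothing here requires a thinking human:

def is_valid_email(address):
    # A deliberately simple validation rule: the decision is algorithmic.
    return "@" in address and "." in address.split("@")[-1]

def check_email_validation():
    # Known inputs, known expected outputs: a machine can decide the verdict.
    assert is_valid_email("someuser@example.com") is True
    assert is_valid_email("not-an-email") is False

if __name__ == "__main__":
    check_email_validation()
    print("All checks passed")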

So even though checking is a part of testing and can be useful, it may not be the best use of your time as a tester. There are various reasons why testers end up carrying out machine checking by hand, such as:

  • Too expensive to automate
  • Too difficult to automate
  • No time
  • Lack of skills
  • Lack of people
  • Lack of expertise

However, if all you are doing during test execution is machine checking, what useful information are you missing out on finding?

If we go back to the title of this article: are you ‘checking’ or ‘testing’ today?

You need to make sure you ask yourself this question each time you test, and evaluate which you feel would have the most value to the project at that time. We cannot continue with the ‘must run every check manually’ mentality, since this only addresses the things we already know and takes no account of the risk or priority of the information we have yet to discover.

The information that may be important or useful is hidden in the things we do not expect or currently do not know about. To find it we must explore the system and look for the interesting details that this type of (exploratory) testing yields.

I will leave you with the following for when you are next carrying out machine checking:

“..without the ability to use context and expectations to ‘go beyond the information given,’ we would be unintelligent in the same way that computers with superior computational capacity are unintelligent.”
J. S. Bruner (1957), Going Beyond the Information Given


Tuesday, 17 September 2013

The Size of the Automation Box

I have previously written an article about doing too much automation and have recently been involved in successfully implementing automation and exploratory testing in agile environments. During discussions about how to implement the automation framework, a colleague made the following statement with regard to defining the automation boundaries:

“Defining the size of the black box is critical.”

It made me think about how important knowing where your automation boundaries are is to the successful deployment of an automation framework. One point to keep in mind when you implement an automation framework is the principle of K.I.S.S. (‘keep it simple, stupid’).
I have seen (and been involved in) several failed automation implementations, and in retrospect the main reason for failure has been trying to automate everything that you can possibly think to automate. I discussed this problem, and the importance of thinking critically about what to automate, in the article on too much automation. Time and time again people go rushing ahead to automate everything they can without thinking about the boundaries of what their automation framework should be covering.
So what is the solution? The rest of this article offers some guidelines that may help you in your quest to implement a successful automation framework.
The first thing to look at when trying to implement automation is the context of the testing you are carrying out or attempting to automate. Are you involved in subsystem testing, component testing, end-to-end testing, customer-driven testing or some other type of testing? It is important to look at this and find out what you want the automation to do for you.
  • At the customer level you may want to automate the common customer scenarios.
  • At a system level you may want to validate the inputs and outputs of a specific set of components and stub out the rest of the end-to-end system.
So defining the context and purpose of your testing from an automation viewpoint is important. It links to the second point: defining the size, or scope, of your automation for the project. Most automation implementations fail to take into account the limits of what the automation is intended to cover. As my colleague put it, “defining the size of the black box is critical”.
The size of this box will change depending on the context of the type of testing you are carrying out. For your automation implementation to be successful, it is important that you define up front the boundaries of what the automation is expected to cover. Automation should be about what you input into the system and what you expect as an output; everything in between is your black box. Knowing your testing context should help define the size and limits of your black box, and knowing this information before you start to implement a test automation framework should greatly improve the chances of your implementation being successful.
Example of defining the black box at a component level of testing:

(Diagram: shaded grey area = the black box)

For this example the boundaries would be:
  • Create automated test inputs for Input A in relation to the defined requirements and expectations
  • Automate the expected outputs, Output 1 and Output 2, based upon the requirements and your expectations
  • What happens inside Component X is of no interest, since it is outside the boundaries of what we are automating
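
As a minimal sketch of what automation at this boundary might look like (Python, with invented values; component_x is a stand-in for driving the real component under test):

def component_x(input_a):
    # Stand-in for Component X: in real use this call would drive the deployed
    # component. The automation never looks inside it.
    return "output 1 value", "output 2 value"

def check_component_x_boundary():
    # Drive the black box only through Input A...
    output_1, output_2 = component_x("input A built from the requirements")
    # ...and assert only on the defined outputs, Output 1 and Output 2.
    assert output_1 == "output 1 value"
    assert output_2 == "output 2 value"

if __name__ == "__main__":
    check_component_x_boundary()
    print("Boundary checks passed")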

Another example of defining the black box at a component level of testing:
  • Create automated test inputs for Input B in relation to the defined requirements and expectations
  • Automate the expected outputs, Output 1, Output 2, Output 3 and Output 4, based upon the requirements and your expectations
  • What happens inside Component Y is of no interest, since it is outside the boundaries of what we are automating
Example of defining the black box at a product level of testing:
  • Create automated test inputs for Input A and Input B in relation to the defined requirements and expectations
  • Automate the expected outputs, Output 1, Output 2, Output 3 and Output 4, based upon the requirements and your expectations
  • What happens inside the product is of no interest, since it is outside the boundaries of what we are automating
I am aware that this appears to be common sense, but time and time again people fail to take on board the context of what they expect the automation to cover.
So next time you are involved in implementing an automation framework, remember to define the size of your black box before you start your implementation; that way you know the boundaries of what your automation will cover and check for you.

Friday, 2 August 2013

Stop Doing Too Much Automation

While researching my article on testing careers for the Testing Planet, a thought struck me about the number of respondents who indicated that ‘test’ automation was one of their main learning goals. This made me think about how our craft appears to be going down a path where automation is the magic bullet that can resolve all the issues we have in testing.
I have had the idea for this article floating around in my head for a while now, and the final push came when I saw the article by Alan Page (Tooth of the Angry Weasel), ‘Last Word on the A Word’, in which he said much of what I was thinking. So how can I expand on what I feel is a great article by Alan?
The part of the article that I found the most interesting was the following:

“..In fact, one of the touted benefits of automation is repeatability – but no user executes the same tasks over and over the exact same way, so writing a bunch of automated tasks to do the same is often silly.”

This is similar to what I want to write about in this article. Time and time again I see dashboards and metrics being shared around stating that by running an automated ‘test’ a million times we have saved a tester from running it manually a million times; if the ‘test’ took an hour and a minute to run manually and one minute to run automated, that is an hour saved per run, and therefore a million hours of testing saved. This is very tempting for a business that speaks in value, which in this context means costs. Saving a million hours of testing through automation is a significant cost saving, and this is the kind of tangible measure of ROI (return on investment) that businesses like to see for ‘test’ automation. Worryingly, this is how some companies sell their ‘test’ automation tools.
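
The arithmetic behind these dashboards is seductively simple; a sketch using the numbers above:

manual_minutes = 61        # an hour and a minute to run the script manually
automated_minutes = 1      # one minute to run it automated
runs = 1_000_000

saved_hours = (manual_minutes - automated_minutes) * runs / 60
print(saved_hours)         # 1000000.0 hours "saved"

# What this sum leaves out: the cost of building, maintaining and analysing
# the automation, and everything a thinking human would have noticed.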

If we step back for a minute and reread Alan’s statement, the thing that most people who say we should automate all testing talk about is the repeatability factor. Now let us really think about this. When you run a test script manually you do more than what is written down in the script: you think both critically and creatively, and you observe things far from the beaten track of where the script was telling you to go. Computers see in assertions: true or false, black or white, 0 or 1. They cannot see what they are not told to see. Even with the advances in artificial intelligence, it is very difficult for automated systems to ‘check’ more than they have been told to. To really test, and test well, you need a human being with the ability to think and observe. Going back to our million-runs example: if we ran the same test a million times on a piece of code that has not changed, the chance of finding new issues or problems remains very slim; run it manually with a different person each time and our chances of finding issues or problems increase. I am aware that our costs also increase and there is a point of diminishing returns. James Lyndsay has discussed the importance of diversity on his blog; his article also has a very clever visual aid to demonstrate why diversity is important, and as a side effect it helps to highlight the point of diminishing returns. This is the area the business needs to focus on, rather than how many times you have run a test.

My other concern is the use of metrics in automation to indicate how many of your tests you have automated or could automate. How many of you have been asked this question? The problem I see is what people mean by “how many of your tests”. What is the question based upon? Is it...
  • all the tests that you know about now?
  • all possible tests you could run?
  • all tests you plan to run?
  • all your priority one tests?
The issue is that this number will constantly change as you explore and test the system and learn more. Therefore, if you start reporting it as a metric, especially as a percentage, it soon becomes a non-valuable measure that costs more to collect and collate than any benefit it may imply. I like to use the following example as an extreme view.

Manager: Can you provide me with a percentage of all the possible tests that you could run for system X that you could automate?
Me: Are you sure you mean all possible tests?
Manager: Yes.
Me: OK, easy: it is 0%.
Manager: ?????

Most people are aware that there can be an infinite number of tests even for the simplest of systems, and any number divided by infinity is close to zero, hence the answer provided in the example above. Others could argue that we only care about how much of what we have planned to do, or only the high-priority items, can be automated, and that is OK, to a point. But be careful about measuring this as a percentage, since it can and will vary up or down, and this can cause confusion: as we test we find new stuff, and as we find new stuff the number of things to test increases.
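
A quick sketch of why the percentage is unstable (invented numbers):

automated = 50
known_tests = 100
print(automated / known_tests)   # 0.5: the report says "50% automated"

# Exploration uncovers another hundred tests worth running:
known_tests += 100
print(automated / known_tests)   # 0.25: same automation, now "25% automated"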

My final worry with ‘test’ automation is the amount of it we are doing (hence the title of this article). I have seen cases where people automate for the sake of automation, because that is what they have been told to do. This links in with the previous point about measuring the tests that can be automated. There needs to be some intelligence in deciding what to automate and, more importantly, what not to automate. The problem is that when we are measured by the number of ‘tests’ we can automate, human nature will make us act in ways that look good against what we are being measured on. This causes major problems: people stop thinking about what would be the best automation solution and concentrate on trying to automate as much as they can, regardless of cost.

What! You did not realise that automation has a cost? One of the common problems I see when people sell ‘test’ automation is that they conveniently (or otherwise) forget to include the hidden costs of automation. We always see figures for the amount of testing time (and money) saved by running a set of ‘tests’ each time. What does not get reported, and very rarely gets measured, is the amount of time spent maintaining the automation and analysing the results from it. This is important, since it is time a tester could spend doing some testing and finding new information rather than confirming existing expectations. It appears to be missing whenever I hear people talking about ‘test’ automation in a positive way. What I see is a race to automate all that can be automated, regardless of the cost to maintain it.

If you are looking at implementing test automation, you seriously need to think about what the purpose of the automation is. I would suggest you do ‘just enough’ automation to give you confidence that the product appears to work in the way your customer expects. This level of automation then frees up your testers to do some actual testing, or to create automation tools that can aid testing. You need to stop doing too much automation and look at ways to make your ‘test’ automation effective and efficient without it being a bloated, cumbersome, hard-to-maintain monstrosity. (Does that describe some people’s current automation system?) Also, automation is mainly code, so it should be treated the same as code and be regularly reviewed and refactored to reduce duplication and waste.

I am not against automation at all; in my daily job I encourage and support people to use automation to help them do excellent testing. I feel it plays a vital role as a tool to SUPPORT testing. It should NOT be sold on the premise that it can replace testing or thinking testers.

Some observant readers may wonder why I write ‘test’ in this way when mentioning ‘test’ automation. My reasons can be found in James Bach’s article ‘Testing and Checking Refined’.