Monday, 3 November 2014

Agile is.....

This post is inspired by Michael Bolton's post "Testing is...".

I am becoming aware of more and more statements being made about the reduced need for dedicated testers in software development that follows the agile manifesto and principles.

There is a growing call for developers to be both tester and developer, and vice versa, under the mantra 'Everyone codes and everyone tests'. I led a brief discussion on this topic in London for Tony Bruce, in which we discussed the good and bad implications of this mantra; hopefully a new post will come out of that discussion at some point in the future.

This post looks at what the agile manifesto and principles say in a testing context. These are my thoughts on what the manifesto and principles of agile mean to testers; feel free to adapt, change or alter them for your own purposes. When you come across people stating that there is no need for specialist testers in agile software development teams, you can use some of the following:
  • Agile among other things - is a word used to describe a manifesto and a set of principles for helping to develop software
  • Agile among other things - is about delivering tested working software, which requires testing thinking skills
  • Agile among other things - is about satisfying the customer through early and continuous delivery of valuable tested software. 
  • Agile among other things - is about using testing thinking skills to discover if the software is of value to the customer. 
  • Agile among other things - is about empowering people to do good work, which requires people with testing expertise.
  • Agile among other things - is about using testing skills to help deliver frequent working software.
  • Agile among other things - is about using tools to support skilled testers.
  • Agile among other things - is about people with a variety of experiences & skills working together to create products that customers want.
  • Agile among other things - is about testing software so that the delivered software works in ways that a reasonable user expects.
  • Agile among other things - is about producing working software where as much information about how the software behaves has been uncovered by skilled testers.
  • Agile among other things - is about creating quality products using a diverse team with a diverse range of skills and expertise.
  • Agile among other things - is about working with people from a variety of backgrounds - programmers, testers, business, help desk, support - and trusting them to develop working software.
  • Agile among other things - is about testers informing others in the team, by means of face-to-face communication, of the valuable information they found during testing.
  • Agile among other things - is about using automation tools to check that what you believe to be true is still true, though they do not replace skilled human testing.
  • Agile among other things - is about being successful in delivering working, tested software.
  • Agile among other things - is about embracing changes and having the confidence that the testers will uncover information of value regarding the changes made.

Tuesday, 14 October 2014

Risk vs Uncertainty in Software Testing

Traditionally, software testing appears to be based upon risk, and many models and examples of this have been published - just search the internet for 'risk based testing'.

The following are a few examples from a quick search 

The objective of Risk Analysis is to identify potential problems that could affect the cost or outcome of the project.  Ståle Amland, 1999 http://www.amland.no/WordDocuments/EuroSTAR99Paper.doc

In simple terms – Risk is the probability of occurrence of an undesirable outcome ISTQB Exam Certification – What is Risk Based Testing 2014 - http://istqbexamcertification.com/what-is-risk-based-testing/

Risk = You don’t know what will happen, but you do know the probabilities; Uncertainty = You don’t even know the probabilities.  Hans Schaefer, Software Test Consulting, Norway, 2004 http://www.cs.tut.fi/tapahtumat/testaus04/schaefer.pdf

Any uncertainty or possibility of loss may result in non-conformance of any of these key factors.  Alam and Khan, 2013, Risk-based Testing Techniques: A Perspective Study http://www.academia.edu/3412788/Risk-based_Testing_Techniques_A_Perspective_Study

James Bach goes a little deeper and introduces risk heuristics:

“Risk is a problem that might happen” James Bach 2003 Heuristics of Risk Based Testing  http://www.satisfice.com/articles/hrbt.pdf

And continues with the following statement in the 'Making it All Work' section:

“…don’t let risk-based testing be the only kind of testing you do. Spend at least a quarter of your effort on approaches that are not risk-focused…”

All of the examples above look at software testing and how to focus testing effort based upon risk, but they make no mention of uncertainty. I have struggled to find any software testing models or articles on uncertainty which I feel could have value to the business in software projects. There are a few misconceptions about risk and uncertainty, with people commonly mixing the two together and stating they are the same.

Some of the articles appear to follow the fallacy of mixing risk with uncertainty and attempt to measure uncertainty in the same way as risk.  The issue I find with these articles is this: how can you measure something which has no statistical distribution?

One type of uncertainty that people attempt to measure is the number of defects in a product, using complex formulas based upon lines of code or some other wonderful statistical model.  Since the number of defects in any one product is uncertain, I am unsure of the merits of such measures and their reliability.
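To illustrate the kind of model being questioned, here is a toy defect estimate in Python. The density figure is a made-up assumption, not an industry fact - which is exactly the problem: the output looks precise, but it is only as good as that assumed number.

```python
# A toy version of the kind of estimate the text questions:
# predicted defects = size in KLOC * an assumed defect density.
ASSUMED_DEFECTS_PER_KLOC = 15   # an assumption, not a measured fact


def predicted_defects(lines_of_code):
    """Return a precise-looking defect count from a crude size model."""
    return round(lines_of_code / 1000 * ASSUMED_DEFECTS_PER_KLOC)


# 40,000 lines of code -> a confident-sounding single number.
print(predicted_defects(40_000))  # 600
```

The model answers "600 defects" for a 40 KLOC product, yet nothing in it reflects what counts as a defect for any particular user - the uncertainty has simply been hidden inside the assumed density.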



The concern here is how you would define a defect.  Surely it is based not only upon the number of lines of code or the number of test cases defined, but upon the uniqueness of each and every user?  In other words, what some may see as defects, others will gladly ignore and say it is OK - it is the character of the program.

Let’s look at what we mean by risk and uncertainty:

  • Risk: We don’t know what is going to happen next, but we do know what the distribution looks like.
  • Uncertainty: We don’t know what is going to happen next, and we do not know what the possible distribution looks like.

Michael Mauboussin - http://www.michaelmauboussin.com/

What does this mean to the lay person?

Risk can be judged against statistical probability, for example the roll of a die.  We do not know what the outcome (roll) will be (if the die is fair), but we know the outcome will be a number between 1 and 6, with a 1 in 6 chance of each.
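A tiny Python sketch (the sample size is my own choice) makes the "known distribution" point concrete: with a fair die we cannot predict any single roll, but every face settles towards the same 1-in-6 frequency.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the run is repeatable

# Simulate 60,000 rolls of a fair six-sided die.
rolls = Counter(random.randint(1, 6) for _ in range(60_000))

# Each face is a known 1-in-6 risk, so every empirical
# frequency should sit close to 1/6 ~ 0.1667.
for face in range(1, 7):
    print(face, round(rolls[face] / 60_000, 3))
```

This is risk in Knight's sense: the next roll is unknown, but the distribution governing it is fully known.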

Uncertainty is where the outcome is not known and there is no statistical probability. An example of uncertainty is what your best friend intends to eat next Thursday at 5pm. Can you create a probability model for that event?

Basically, risk is measurable; uncertainty is not.

“To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term "risk" to designate the former and the term "uncertainty" for the latter.”  Risk, Uncertainty, and Profit, Frank Knight, 1921 - http://www.econlib.org/library/Knight/knRUP7.html

The problem is that many people see everything as a risk and ignore uncertainty.  This is not a deliberate action; it is how our brains work to deal with uncertainty. The following psychological experiment shows this effect.


The following example of the Ellsberg paradox is taken from the following article:  http://www.datagenetics.com/blog/december12013/index.html

_____________

Let’s play a different thought experiment. Imagine there are two urns.

  • Urn A contains 50 red marbles and 50 white marbles.
  • Urn B contains an unknown mixture of red and white marbles (in an unspecified ratio).


You can select either of the Urns, and then select from it a random (unseen) marble. If you pick a red marble, you win a prize. Which Urn do you pick from?

  • Urn A 
  • Urn B 


In theory, it should not matter which urn you select from. Urn A gives a 50:50 chance of selecting a red marble. Urn B also gives you the same 50:50 chance.

Even though we don’t know the distribution of marbles in the second urn, since it only contains red and white marbles, this ambiguity equates to the same 50:50 chance.

For various reasons, most people prefer to pick from Urn A. It seems that people prefer a known risk rather than ambiguity.

People prefer to know the risk when making a decision rather than base it on uncertainty.
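The first urn experiment can be checked with a small simulation. This is a sketch: since the paradox leaves Urn B's ratio unspecified, I model it as a proportion drawn uniformly at random before each pick - under that assumption both urns win about half the time, even though only Urn A's odds are known in advance.

```python
import random

random.seed(42)
TRIALS = 100_000

# Urn A: a known 50/50 mix of red and white marbles.
wins_a = sum(random.random() < 0.5 for _ in range(TRIALS))

# Urn B: the red/white ratio itself is unknown, so model it as
# a uniformly random proportion chosen before each pick.
wins_b = 0
for _ in range(TRIALS):
    ratio = random.random()          # unknown proportion of red
    wins_b += random.random() < ratio

print(wins_a / TRIALS, wins_b / TRIALS)  # both land near 0.5
```

The expected chance of red is the same for both urns; what differs is only that Urn A's distribution is known and Urn B's is not - and that ambiguity alone is enough to push most people towards Urn A.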

Next experiment: This time there is only one urn. In this urn is a mixture of Red, White and Blue marbles.

There are 90 marbles in total. 30 are Red, and the other 60 are a mixture of White and Blue (in an unknown ratio). You are given a choice of two gambles:

  • Gamble 1 you win $100 if you pick a Red marble.
  • Gamble 2 you win $100 if you pick a White marble.


Which gamble do you take? As the section above suggests, most people select Gamble 1: they prefer their risk to be unambiguous. A quick check of the expected value of both gambles shows they are equivalent (each with a ⅓ probability of winning). People go with the known quantity.

____________
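The ⅓ equivalence in the second experiment can be worked through exactly. This is a sketch: weighting each possible white count (0 to 60) equally is my assumption, since the urn's ratio is unspecified.

```python
from fractions import Fraction

# Gamble 1: 30 red marbles out of 90 -- a known 1/3 chance.
p_red = Fraction(30, 90)

# Gamble 2: the white count is anywhere from 0 to 60. With no
# information, weight each count equally and average the chances.
p_white = sum(Fraction(k, 90) for k in range(61)) / 61

print(p_red, p_white)  # 1/3 1/3
```

Exact fractions make the point cleanly: both gambles are worth exactly ⅓, yet the one with the known distribution feels safer.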

The summary of this is that we gravitate towards known risks rather than uncertainty.

What has all of this to do with software testing?

The majority of our testing effort is spent on testing based upon risk, with outcomes that are statistically known.  This is an important task, but does it have more value than testing against uncertainty?  Using automated tools it is possible to test against all the possible outcomes when we are using a risk-based testing approach; risk is based upon known probabilities, which machines are good at calculating and working through.

Since it is difficult to predict the future of uncertain events, and we find it even more difficult to adjust our minds to looking for uncertainties, an exploratory testing approach may provide good value against uncertainties.  Tools can be of use here, such as random data generators and emulators, where the data used for testing is not based upon risk but is entirely random and can produce unexpected results.
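As an illustration of such a tool, here is a minimal random input generator. The character pool and length limit are arbitrary choices of mine - the point is that the inputs are not derived from any risk model, so they can surface behaviour nobody thought to list.

```python
import random
import string

random.seed(7)  # remove the seed for genuinely unpredictable runs

# A pool deliberately wider than "normal" input: letters, digits,
# punctuation, whitespace and a few non-ASCII characters.
POOL = string.ascii_letters + string.digits + string.punctuation + "\t åßÆ€"


def random_input(max_len=20):
    """Return a random string to throw at the system under test."""
    length = random.randint(0, max_len)
    return "".join(random.choice(POOL) for _ in range(length))


# Generate a small batch of unpredictable test inputs.
batch = [random_input() for _ in range(5)]
for s in batch:
    print(repr(s))
```

Feeding strings like these into input fields, file parsers or APIs during an exploratory session is one cheap way to probe the uncertain space that risk-based test cases never cover.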

The key message of this article is that we need to be careful not to confuse uncertainty with risk, and to ask ourselves whether we are testing based upon risk today or upon uncertainty.  Each has value, but sometimes one has more value than the other.

Tuesday, 30 September 2014

Latest Chapter of Book Published - Being Creative

I have published the latest chapter of my book The Psychology of Software Testing entitled 'Being Creative'.  This has been one of the most enjoyable chapters I have worked on and one that I am very proud of.  The following is a short extract from this chapter.

_________________

What is Creativity

“Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while.”
 Steve Jobs - Wired Magazine

Many have a misconception of what being creative means.  Take a moment to note down some words that you feel describe creativity.

Did any of your words match the ones below?

[Word cloud of words describing creativity]

Creativity can be all of these and more. It does not help that there are a variety of definitions of creativity, for example:

"Creative thinking is the generation of new ideas.”

or

"Creativity is the ability to combine ideas, things, techniques or approaches in a new way." 

Doppelt gives a good reason why it is so difficult to define creativity:

"Creativity is one of the words in the English language which means many things to many people.  At various times it may mean different things to the same person." Doppelt, J. E., 2012, What is creativity?

There is no correct definition for being creative and that is wonderful in itself, since you have no barriers to being creative.

When we talk about creating ideas it does not necessarily mean coming up with something entirely new.  Even if the idea or concept you came up with is new, it may not be game-changing or revolutionary.  The majority of ideas come from existing ideas, or from a combination of different ideas brought together to create a new one.

Software testing involves large amounts of creative thinking, and not just during the test planning phase.  When testing software we use creative processes to discover, uncover and learn.  When testing we should utilize these natural creative processes to guide our direction and future opportunities to test.  The majority of testers do this without even being aware that it is happening.  If you are following a test script or a testing charter, how often do you go off the beaten track because you thought of a new creative approach?

To put this another way, how often do you find ways to test the software that are both novel and unique?  Capturing this creative process is useful, since you then have a record of your thinking at that time, which can help to produce even more ideas.  The creative process is iterative: by creating new ideas you end up utilizing those ideas to create even more ideas.

______________________


This chapter also includes some extras:
  • Creativity Cue Cards
  • Software Testing SCAMPER poster
  • Software Quality Characteristics Poster
  • A JavaScript ideas generator
I will be presenting some of this material at the London Tester Gathering Workshops:

Creative and Critical Thinking and Testing Workshop
Thursday 16th and Friday 17th October 2014

The Skills Matter eXchange
116-120 Goswell Road,
 London, 
EC1V 7DP, 
GB

Friday, 19 September 2014

Agile testing activity checklist

As the barriers between development and test blur when working in scrum teams and being agile, testing activities can sometimes be lost.  There may be occasions where there is no testing expertise in a scrum team, and the scrum team members struggle to know what to focus on regarding testing activities in their sprint.

Since this was becoming an issue with some of the teams I was involved with, some colleagues and I came up with a testing activities checklist based around Lisa Crispin and Janet Gregory's agile testing quadrants.

The rest of this article shows this checklist.  Please feel free to adapt and change it to meet your testing needs in agile scrum teams.  The only caveat I make is that if you find it useful, please let me know.

Testing Activity Checklist
This is an example template for scrum teams to use as a checklist of the testing activities carried out during a sprint.

Sprint number:

  • Does the scrum team have any testing expertise? (Yes / No)
  • Has the scrum team done any testing activities in this sprint? (Yes / No)


Unit/Component (Q1)

Each check records a response plus any comments and justifications.

  • Do unit tests exist? (Yes / No)
  • What is the level of unit test coverage? (sanity, bad input, edge cases, regression, etc.)
  • What is the quality of the unit tests? (per the quality key table below)
  • What is the % code coverage, and have you met your target %? Which tool was used? (nn%)
  • Has any state coverage been done? (Yes / No)
  • Zero static analysis violations? (Yes / No)
  • Do all check-ins have code reviews? (Yes / No)
  • Do all check-ins have unit test reviews? (Yes / No)
  • Are unit tests automated in a CI? (Yes / No)
  • How often are unit tests run? (every check-in of development code / nightly / other)
  • Has unit testing been added to the Definition of Done (DoD)? (Yes / No)


Functional (Q2)

Each check records a response plus any comments and justifications.

  • Do functional tests exist? (Yes / No)
  • Have the acceptance test criteria been reviewed? (Yes / No)
  • What is the coverage of the functional tests? (0-3, per the coverage key below)
  • Are functional tests automated? (Yes / No)
  • Is a CI system being used for automated system tests? (Yes / No)
  • How often are automated functional tests run? (every check-in / nightly / other)
  • Are functional tests run against the latest build? (Yes / No)
  • Has manual (exploratory) functional testing been done? (Yes / No)
  • Have functional tests been added to the DoD? (Yes / No)




Coverage (Key)

  • 0 - We have no good information about this area
  • 1 - Sanity check: major functions and simple data
  • 1+ - More than sanity, but many functions not tested
  • 2 - Common cases: all functions touched, common/critical tests executed
  • 2+ - Some data, state or error coverage beyond level 2
  • 3 - Corner cases: strong data, state, error or stress testing

End to End (Q3)

Each check records a response plus any comments and justifications.

  • Has exploratory testing been done? (Yes / No; if not, give a justification)
  • How much time has been spent on exploratory testing? (number of sessions)
  • How have exploratory testing sessions been captured? (wiki / other tool)
  • What is the quality of the acceptance criteria? (per the quality key below)
  • Have the acceptance criteria been tested? (Yes / No)
  • Have the demo criteria been tested? (Yes / No)
  • Have end-to-end customer tests been added to the DoD? (Yes / No)



Quality Key

  • We know of no problems in this area that threaten to stop go-live or interrupt testing, nor do we have any definite suspicions about any.
  • We know of problems that are possible showstoppers, or we suspect that there are important problems not yet discovered.
  • We know of problems in this area that definitely stop go-live or interrupt testing.

Non-Functional (Q4)

Each check records a response plus any comments and justifications.

  • Do non-functional tests exist? (Yes / No)
  • Which types of non-functional tests exist? (performance, reliability, usability, stress, spike, scalability, endurance, volume, security (e.g., CSDL))
  • Have non-functional tests been added to the DoD? (Yes / No)



Testing Activities Definition of Done

Each check records a response plus any comments and justifications.

  • Have all DoD criteria been met for each quadrant? (Yes / No)
  • Zero open defects in the sprint? (Yes / No)
  • 100% of all possible automated checks running (unit, functional/E2E)? (Yes / No)
  • 100% automation pass rate? (Yes / No)
  • Exploratory testing target met (% of possible time spent exploring)? (Yes / No)
  • CI builds are in place? (Yes / No)
  • Sprint demos done and feedback given? (Yes / No)



Testing Quadrant Dashboard
This is a simple checklist dashboard: for each quadrant, have all the activities listed above been completed? Yes = Green, No = Red.

Q1 | Q2 | Q3 | Q4

Green = met the DoD for that quadrant / Red = did not meet the DoD for that quadrant
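The green/red rule can be sketched in a few lines of Python (the per-quadrant results below are hypothetical examples, not from a real sprint):

```python
# Hypothetical per-quadrant checklist results: True means that
# activity met the DoD, False means it did not.
checklist = {
    "Q1": [True, True, True],    # unit/component activities
    "Q2": [True, False, True],   # functional activities
    "Q3": [True, True],          # end-to-end activities
    "Q4": [True],                # non-functional activities
}

# A quadrant shows Green only if every listed activity met the DoD.
dashboard = {q: "Green" if all(done) else "Red" for q, done in checklist.items()}
print(dashboard)  # Q2 comes out Red because one activity failed
```

Making the rule this strict is deliberate: a single missed activity turns the whole quadrant red, which prompts the conversation rather than hiding the gap.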

I have also uploaded a word document version here so you can adapt and change it to suit your needs.


Postscript:

I will be running a Creative and Critical Thinking workshop in London - Thursday 16th - Friday 17th October 2014