
Monday, 5 June 2017

Usage of words

I came across the following tweet:


I tried to reply on Twitter, but the message I wanted to convey did not come across in the way I intended.

Disclaimer: I am not, nor have I ever been, a member of any cult.

Marlena's words came across very strongly and appear to be based upon a negative experience when encountering discussions on the use of these words.  Others stepped in with their own experiences, and the main message seems to be that these words have been used to derail important discussions.  I find that a shame, since to me the distinction between these words has been useful when talking to executives and others from outside the testing world about the risks of unfocused automation and testing.

My concern with Marlena's statement is the suggestion that the distinction is of low value, a mere semantic argument.  Semantics, the meaning of words, is vital for society to flourish, and arguments over meaning have been going on for a long time.  People have argued over what certain words mean, and over time the meaning of some words changes.  Some are taken over to deride or insult people, and sometimes these words are reclaimed by those being insulted.  For example, the word "queer": to some it is a hostile word, to others it is a badge of honor.

I worked in Israel for a while and would often get strange looks when, running workshops and replying to a question, I would say 'smallish'.  It was a while before I figured out that 'ish' is Hebrew for 'man', so I was saying 'small man'.  Culturally, words can have different meanings and cause confusion, and the same can be said of the words 'checking' and 'testing'.  Using these words in the right situation and context to inform and have a discussion can be useful; if they are used to make a point or win an argument they become less useful.  If used in an attempt to show superior intellect, then the discussion is already lost.

I use the distinction between the words when discussing the testing effort: how much checking has been done against how much testing has been done.  How much effort have we spent on confirming explicit knowledge, information we feel we already know, against the effort spent on tacit knowledge, information we do not yet have?  Knowing the difference between these two can be vital in mitigating risk.  If all the effort and money is being spent on checking, with very little testing, there could be a risk that something we do not know is dangerous; unless we spend a little more effort on testing to uncover more of what we do not already know, there are unknown risks.  Equally, if the product is mature and changes are minor, it may make sense to put more effort into checking.

For me, having these meanings helps to inform and tell a story.  I do not use them to score points or to be a member of a cult; I use them because they have value to me in my context.  I do not really care if you use these words or not.  I have explained how I use them and the usefulness I find in them.  Yes, I will discuss with people why I feel the distinction has value, but at the same time I respect others' opinions and viewpoints.  To me it is a useful tool for communicating with teams around the world.


Tuesday, 14 October 2014

Risk vs Uncertainty in Software Testing

Traditionally, software testing appears to be based upon risk, and many models and examples of this have been published; just search the internet for ‘risk based testing’.

The following are a few examples from a quick search:

The objective of Risk Analysis is to identify potential problems that could affect the cost or outcome of the project.  Ståle Amland, 1999 http://www.amland.no/WordDocuments/EuroSTAR99Paper.doc

In simple terms – Risk is the probability of occurrence of an undesirable outcome ISTQB Exam Certification – What is Risk Based Testing 2014 - http://istqbexamcertification.com/what-is-risk-based-testing/

Risk = You don’t know what will happen but you do know the probabilities; Uncertainty = You don’t even know the probabilities.  Hans Schaefer, Software Test Consulting, Norway 2004 http://www.cs.tut.fi/tapahtumat/testaus04/schaefer.pdf

Any uncertainty or possibility of loss may result in non-conformance of any of these key factors.  Alam and Khan, 2013, Risk-based Testing Techniques: A Perspective Study http://www.academia.edu/3412788/Risk-based_Testing_Techniques_A_Perspective_Study

James Bach goes a little deeper and introduces risk heuristics

“Risk is a problem that might happen” James Bach 2003 Heuristics of Risk Based Testing  http://www.satisfice.com/articles/hrbt.pdf

And continues with the following statement in the 'Making it All Work' section:

“…don’t let risk-based testing be the only kind of testing you do. Spend at least a quarter of your effort on approaches that are not risk-focused…”

All of the examples above look at software testing and how to focus testing effort based upon risk; they make no mention of uncertainty. I have struggled to find any software testing models or articles on uncertainty, which I feel could have value to the business in software projects. There are a few misconceptions about risk and uncertainty, with people commonly mixing the two together and stating they are the same.

Some of the articles appear to follow the fallacy of mixing risk with uncertainty and attempting to measure uncertainty in the same way as risk.  The issue I find with these articles is this: how can you measure something which has no statistical distribution?

One type of uncertainty that people attempt to measure is the number of defects in a product, using complex formulas based upon lines of code or some other wonderful statistical model.  Since the number of defects in any one product is uncertain, I am unsure of the merits of such measures and their reliability.



The concern here is how you would define a defect.  Surely it is not based only upon the number of lines of code or the number of test cases defined, but upon the uniqueness of each and every user?  In other words, what some may see as defects others will gladly ignore and say is OK; it is the character of the program.

Let’s look at what we mean by risk and uncertainty:

  • Risk: We don’t know what is going to happen next, but we do know what the distribution looks like.
  • Uncertainty: We don’t know what is going to happen next, and we do not know what the possible distribution looks like.

Michael Mauboussin - http://www.michaelmauboussin.com/

What does this mean to the lay person?

Risk can be judged against statistical probability, for example the roll of a die.  We do not know what the outcome (roll) will be (if the die is fair), but we know the outcome will be a number between 1 and 6, each with a 1 in 6 chance.

Uncertainty is where the outcome is not known and there is no statistical probability. An example of uncertainty is what your best friend intends to eat next week on Thursday at 5pm. Can you create a probability model for that event?

Basically, risk is measurable; uncertainty is not.
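The die example above is easy to check numerically. A minimal sketch in Python, assuming a fair six-sided die: the outcome of any one roll is unknown, but the distribution is known, and a simulation converges on it.

```python
import random
from collections import Counter

random.seed(42)

# Risk: we cannot predict one roll, but we know the distribution.
rolls = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(rolls)

# Each face should come up roughly 1/6 of the time.
for face in range(1, 7):
    print(face, round(counts[face] / len(rolls), 3))
```

No equivalent simulation can be written for the dinner question above, because there is no distribution to sample from; that is the difference the quote below formalises.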

“To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term "risk" to designate the former and the term "uncertainty" for the latter.” : - Risk, Uncertainty, and Profit  Frank Knight 1921 -  http://www.econlib.org/library/Knight/knRUP7.html

The problem is that many people see everything as a risk and ignore uncertainty.  This is not a deliberate action; it is how our brains work to deal with uncertainty. The following psychological experiment shows this effect.


The following example of the Ellsberg paradox is taken from the following article:  http://www.datagenetics.com/blog/december12013/index.html

_____________

Let’s play a different thought experiment. Imagine there are two urns.

  • Urn A contains 50 red marbles and 50 white marbles.
  • Urn B contains an unknown mixture of red and white marbles (in an unspecified ratio).


You can select either of the Urns, and then select from it a random (unseen) marble. If you pick a red marble, you win a prize. Which Urn do you pick from?

  • Urn A 
  • Urn B 


In theory, it should not matter which urn you select from. Urn A gives a 50:50 chance of selecting a red marble. Urn B also gives you the same 50:50 chance.

Even though we don’t know the distribution of marbles in the second urn, since it only contains red and white marbles, this ambiguity equates to the same 50:50 chance.

For various reasons, most people prefer to pick from Urn A. It seems that people prefer a known risk rather than ambiguity.

People prefer to know the risk when making a decision rather than base it on uncertainty.

Next experiment: this time there is only one urn. In this urn is a mixture of Red, White and Blue marbles.

There are 90 marbles in total. 30 are Red, and the other 60 are a mixture of White and Blue (in an unknown ratio). You are given a choice of two gambles:

  • Gamble 1 you win $100 if you pick a Red marble.
  • Gamble 2 you win $100 if you pick a White marble.


Which gamble do you take? Having read the section above, you will not be surprised that most people select Gamble 1. They prefer their risk to be unambiguous. A quick check of the expected value of both gambles shows they are equivalent (each with a ⅓ probability). People go with the known quantity.

____________

The summary of this is that we tend towards known risks rather than uncertainty.
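Both urn experiments can be checked with a quick simulation. A sketch, under the assumption that Urn B's unknown red/white ratio is equally likely to be anything (drawn uniformly at random on each trial):

```python
import random

random.seed(1)
TRIALS = 100_000

# Experiment 1: Urn A has a known 50:50 mix; Urn B's mix is unknown.
wins_a = sum(random.random() < 0.5 for _ in range(TRIALS))
# Model the unknown mix by drawing the red proportion uniformly each trial.
wins_b = sum(random.random() < random.random() for _ in range(TRIALS))
print(wins_a / TRIALS, wins_b / TRIALS)  # both come out near 0.5

# Experiment 2: 90 marbles, 30 red, 60 split between white and blue.
# Gamble 1 (red) has a known probability of 30/90 = 1/3.
# Gamble 2 (white): if each white count 0..60 is equally likely,
# the expected probability is also 1/3.
p_red = 30 / 90
p_white = sum(w / 90 for w in range(61)) / 61
print(round(p_red, 3), round(p_white, 3))
```

The numbers confirm the article's point: the two choices are mathematically equivalent, so the preference for Urn A and Gamble 1 is about ambiguity, not odds.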

What has all of this to do with software testing?

The majority of our testing effort is spent on testing based upon risk, with outcomes that are statistically known.  This is an important task, but does it have more value than testing against uncertainty?  Using automated tools, it is possible to work through the possible outcomes when we use a risk-based testing approach: risk is based upon known probabilities, which machines are good at calculating and working through.

Since it is difficult to predict uncertain events, and we find it even more difficult to adjust our minds to look for uncertainties, an exploratory testing approach may provide good value against uncertainty.  Tools can be of use here, such as random data generators and emulators, where the data used for testing is not based upon risk but is entirely random and can provoke unexpected results.
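A minimal sketch of the random-data idea in Python. The names here (parse_quantity, random_input) are illustrative, not from the post: the harness separates anticipated failures from everything else, and anything in the "everything else" bucket is a candidate surprise.

```python
import random
import string

def parse_quantity(text):
    """Toy system under test: parse a positive integer quantity."""
    value = int(text)
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

def random_input(max_len=10):
    """Entirely random data, not chosen from any known risk list."""
    return "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, max_len)))

random.seed(7)
surprises = []
for _ in range(1_000):
    data = random_input()
    try:
        parse_quantity(data)
    except ValueError:
        pass                      # anticipated failure mode
    except Exception as exc:      # anything else is an unexpected outcome
        surprises.append((data, exc))

print(len(surprises), "unexpected results")
```

In this toy run nothing unexpected surfaces, but against a real system the same loop can trigger failure modes that no risk-based test list anticipated, which is exactly the value of testing against uncertainty.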

The key message of this article is that we need to be aware of confusing uncertainty with risk, and to ask ourselves: are we testing based upon risk today, or upon uncertainty?  Each has value, but sometimes one has more value than the other.

Sunday, 18 September 2011

Risky Business

Within the testing profession we are all aware of risk, and in the majority of cases we adjust our testing based upon it. Is this the wrong approach to take? What models do you use to assign risk to elements within a project?

In my experience, in most situations the risks we apply are based upon things we know could go wrong or disrupt the testing we are going to carry out. Most risk assessment is done beforehand, up front. It is normally based upon the probability of what could occur to a system, according to someone’s experience, viewpoint and biases at a given time. I am not sure this is the correct approach to take within testing.

Testing is not an exact science: there are some elements where we can predict the outcomes and risks, yet there are far more where it is unpredictable. The thoughts behind this blog post are to look at this unpredictability and how we can try to include it in our testing approach.

Nassim Nicholas Taleb (1), in his book The Black Swan (2), talks about the highly improbable and its impact on the stock market. He states that the majority of investments are based upon risk and use models in which known risks are taken into account. What these models do not include are the improbable risks: natural disasters (3), or individuals/countries (4) doing something that cannot be predicted.

In conclusion, Taleb says that most models are built on top-down predictions using experience of what has already happened, which is a high-risk strategy, rather than planning for the unpredictable: the things that cannot be planned for.

So how can this apply to testing?

How many times within testing have we seen a last-minute showstopper just before go-live? Or a showstopper discovered in the live system when what appears to be a totally random set of circumstances happens (a multiple failure of various unconnected components, such as the recent power failure within the USA (5))? Could this have been predicted as a risk? Would people have built it into their models? IMO, I doubt it.

Do we need to change the way we use risk within our testing? Taleb talks about using stochastic tinkering (6), which to me is fascinating since it appears to match closely the exploratory testing approach. As an example, look at the following two statements:

Thus stochastic tinkering requires experimenting in small ways, noticing the new or unexpected, and using that to continue to experiment.

The general principle is: Do as little as possible unless the system shows you have to do more, then do only as much as you need to keep the process going.

If we change the wording of these statements so that they apply to testing:

Thus stochastic tinkering requires TESTING in small ways, noticing the new or unexpected, and using that to continue to TEST.

The general principle is: Do as little as possible unless the system shows you have to do more TESTING, then do only as much as you need to keep the TESTING going.

Does the exploratory testing approach (by design or accident) do this already? To me it appears that by using exploratory testing, instead of detailed, well-planned, risk-assessed test scripts, we are more likely to discover the ‘black swans’.

Food for thought…

References:

Friday, 5 February 2010

Child’s Play

This article is based upon some thoughts I have had over the past year while watching my granddaughter playing and learning new things; she will be two years old in April. It is amazing how quickly children of that age learn to do tasks without being taught, such as walking, the beginnings of communication, and how to play and explore.

I am an exploratory tester, and my thinking is: how can we as testers harness what children do naturally? Some may say that they do exploratory testing and that it feels natural. If that is the case, why do so many testers have difficulty adapting to exploratory testing and keep falling back to scripted testing?

Peter (unlicensed testers) asks a lot of questions about children and learning in his blog article here: http://007unlicensedtotest.blogspot.com/2009/11/what-do-you-get-if-you-cross-7-month.html

Watching my granddaughter the other day, I observed her trying to put her trousers on. At first she managed to get both legs into the same leg hole; she noticed that this did not feel right, so started again, but this time she tried to put both legs into one of the small leg holes and found that this did not work either. After this she managed to get the trousers the correct way around, with one leg in each hole, but did not pull them up, then tried to walk and fell over.

What can we learn from this?

We can see that she tried different options and observed the results; she then thought she had completed the task but found that it was not really complete. If we convert this to software testing, we can see that she is using heuristics to determine how to do the testing: the trial-and-error approach. She is using her emotions and feelings that something is not correct, and she is doing a lot of noticing, which is something every good exploratory tester should be doing. There are also examples of mentally noting future areas to test: the fact that when she tried to walk she fell over. The next time she tried to put on her trousers, she did manage to pull them all the way up before setting off to walk.

There are many other examples of trial and error that children appear to do when playing. If something does not feel right they will suddenly change the approach to the problem or in some cases they just give up.

What can we learn from children playing? I have observed that this exploring behaviour appears to start diminishing once children start to attend full time school. Why is this so?

Is it because schools start to impose on children their own ethos and standards and re-model children to not take risk?

One of the main elements of testing IMO is the taking of risks.

We all do the ‘let us try this and see what happens’, ‘let us try something else and see what happens’. Many corporations are risk-averse, and as such, when testers are brought in they have to provide a return on investment (a hot topic on Twitter at the moment), so they are less likely to follow a risky approach. Some may argue that exploratory testing is not risky, and I would tend to agree with them. However, the business world does not seem to afford the time to cope with ‘let us try this and see what happens, then let us try that and see what happens’. They require order and structure and no risk.

Children, on the other hand, when learning and playing, do not take risk into account: they try, and if it does not work they try something else, and if that does not work they continue trying until they get a result they are happy with. They remain focused on the task at hand but appear able to solve problems without the fear of failure.

So what happens as we get older? Why do we lose this ability to explore and learn without fear of failure? It appears to be a natural human instinct that somehow is gradually removed as we get older.

Is it to do with the education system and how it removes the risk factor and makes everyone fear failure and taking risks? Do we become institutionalized into conforming to the known path, into stopping asking the probing questions, and into stopping playing? I am not sure I have the answers to these questions; however, I am sure that as testers there are some valuable lessons we could learn from children and how they explore, learn and play. (Simultaneous learning, test design and test execution.)

I wonder whether James would mind if we changed the definition of exploratory testing to ‘learn, explore and play’?

I think everyone who wants to learn more about exploratory testing should take some lessons from children. Do not be afraid to explore; if you make a mistake, learn from it to improve next time, and have fun. Testing should be about fun and enjoyment; it should not be a chore. If it becomes a chore, get a different job……

Does anyone have any interesting games for testers? If so I would love to hear from you.
___________________________________


My next blog should be on my experiences of being coached by Michael Bolton and Jon Bach on using and managing Exploratory Testing.