
Monday, 3 December 2012

Ethnographic research feedback

Some time ago I wrote an article about the relationship between ethnographic researchers and testers and how similar the two roles are.  Recently Peter H-L (@Unlicensed2test) reminded me on Twitter that I had also presented at the UNICOM conference on using some aspects of ethnographic research to aid feedback when we are testing, and that from this I had come up with a new mnemonic and a set of testing-related social science questions.  I had thought that I had already posted this, but it seems I had not.

What follows is taken from that talk.


*************
Within the article there was a section that dealt with questions that the researcher should be asking when studying the subject.  I adapted these to relate to software testing and came up with the following:


  • Substantive Contribution: "Does the testing carried out contribute to our understanding of the software?"
  • Aesthetic Merit: "Does the software succeed aesthetically?" Is it suitable for the end user?
  • Reflexivity: "How did the author come to write this test…Is there adequate self-awareness and self-exposure for the reader to make judgements about the point of view?"
  • Impact: "Does this affect me? Emotionally? Intellectually?" Does it move me?
  • Expresses a Reality: "Does it seem 'true'—a credible account of a requirement?"


Lynn McKee has been updating a list of testing mnemonics on her blog site, so I thought about this and came up with the following mnemonic:

R.A.I.S.E


From this I created a list of questions under each of these headings that can be used to aid feedback when you have been testing, ideally when you are following session-based test management.


Use the following template to do a personal review of the testing that you carried out during the day.
Please try not to answer with a simple yes or no; expand on your reasons either way.
This debrief/review is more about your views, opinions and feelings rather than the product you have been testing.
It should only take you 10 minutes to complete this feedback – try not to write essays.


_______________

Reflect
Personal reflection:
  • Could you have done things better? If so, what? (Both from a personal and a testing perspective)
  • Have you learnt new things about the product under test (that are not documented)?
  • Has your view of the product changed for better or for worse? Why has your view changed?


‘Epistemological reflexivity’ (What limits did we hit?)
  • Did your defined tests limit the information you could find about the product?  (Did you need to explore new areas that you had not defined?)
  • Could your tests have been done differently? If yes how?
  • Have you run the right tests?
  • If you had done things differently, what do you think you would have found out about the product?
  • Which of your assumptions have you found to be true or false?
  • Did the assumptions you made impede or help your testing?

Aesthetic:
  • In your opinion is the product suitable for the end user?
  • In your opinion is the product appealing at first look?
  • In your opinion is the product confusing?
  • In your opinion does the product flow?
  • In your opinion are there any ugly areas?
  • In your opinion does the product succeed aesthetically? Does it meet the image the customer is trying to portray?

Impact:
(This section is intended to capture how you 'feel' about the product and your first impressions. If you answer yes to any question, please provide more details.)
  • Does this affect you?
    • Emotionally?
    • Intellectually?
  • Does it move you?
  • Does it cause you negative/positive feelings?
  • Does it frustrate you?
  • Does it annoy you?

Substantial:
  • Have we covered a substantial amount of the key product areas?
  • Has the testing contributed to your understanding of the product?
  • Do you think you have a substantial understanding of the system and sub systems?
  • Does your knowledge of the system have any substantial gaps?
  • Could you easily explain the system to a first time user?

Expression:
  • Does the product seem 'true'—a credible account of a requirement?
  • Does the product express what will happen in the 'real' world?
  • Does the reality of the product match the expectations of the product?
  • Does the product express unexpected ways of working?

_______________
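For teams that keep their session notes in plain text, the template above could also be generated by a short script. This is only a minimal sketch: the function name and data structure are my own, and the question wording is abridged from the full lists above.

```python
# A minimal sketch of the R.A.I.S.E debrief as a reusable plain-text template.
# Section names and questions are abridged from the lists above; the script
# structure itself is illustrative.

RAISE_TEMPLATE = {
    "Reflect": [
        "Could you have done things better? If so, what?",
        "Have you learnt new things about the product under test?",
        "Has your view of the product changed? Why?",
    ],
    "Aesthetic": [
        "Is the product suitable for the end user?",
        "Does the product succeed aesthetically?",
    ],
    "Impact": [
        "Does this affect you emotionally or intellectually?",
        "Does it frustrate or annoy you?",
    ],
    "Substantial": [
        "Has the testing contributed to your understanding of the product?",
        "Does your knowledge of the system have any substantial gaps?",
    ],
    "Expression": [
        "Does the product seem 'true', a credible account of a requirement?",
        "Does the reality of the product match expectations?",
    ],
}

def render_debrief(template: dict) -> str:
    """Render the debrief questions as a plain-text form to fill in."""
    lines = []
    for section, questions in template.items():
        lines.append(section)
        for question in questions:
            lines.append(f"  - {question}")
            lines.append("    Answer:")
        lines.append("")  # blank line between sections
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_debrief(RAISE_TEMPLATE))
```

The tester fills in each "Answer:" line at the end of the day, keeping to the 10-minute limit described above.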


To make it easier I have created an MS Word document containing the questions, which you can download from Google Docs here.

*************



Friday, 11 May 2012

The Importance of Worth


I am going to start this article with a reflection on when we were children.

I want you to imagine the day in school when, as a very young child, you produced your first ever painting.  You took all day to produce it, making careful use of colour and getting it exactly how you wanted it to look.  At the end of the day you took your painting home to show your parents.  You were so excited and full of joy and expectation about what your parents would say about all the hard work you had done.  You ran into the living room with your painting in your hand and shouted out "Look, look what I have done today!".  Your parents came over and took great interest in what you had produced, commenting on how clever and how wonderful you are.  They said how proud they were of you, and they placed your artwork on the fridge in the kitchen where everyone could see it.

Now, if you have recalled this scene in your mind, as many of you will have done, how are you feeling?  Did the thought make you happy?  Did you feel pride in what you did?  Are you smiling at this thought?

So now let us zoom forward to the present day…

You spend days/weeks/months (delete as applicable) creating test scripts based upon assumptions, writing them up in whatever test case management system you have been told to use.  You put all your effort and thinking into creatively crafting these step-by-step instructions for 'testing' the system.

After you have done this you get ready to start testing using the work you have spent so long creating.  Once you start testing you realise that most of what you did upfront, all that effort, is not going to be used.  All those test scripts which you sweated over, creating and completing in precise step-by-step detail, get ignored and never see the light of day; the labour of your work is forgotten and never commented on.

How often does this happen when we are told we must follow a scripted testing approach?  If we are honest it happens a lot; I know that in my own past over half the scripts I created never got reviewed or used.  Half of the work I did was simply forgotten about and left to gather dust in the test case repository.  I should make it clear that I am not against test scripting, and that in the correct context scripts have value, but indiscriminately forcing people to do something without experiencing it is, in my opinion, a stupid and pointless exercise.

Let us step back to our story from earlier.

Now imagine that as a child you rush home to your parents with your painting in hand, and once at home your parents take your painting and, without saying a word, lock it in a drawer and carry on with what they were doing.  How would you feel now?  Place yourself into the mind of your child self and imagine how you would feel.  Upset?  Sad?  Hurt?  Worthless?

So why, when we do something as creative as testing, do we do this?  We create so much in the way of test scripts but never get the chance to be proud of what we have done, what we have achieved.  We lock it away, never to be used again, never to be talked about.  Is it any wonder that so many testers feel sad, unhappy and worthless in what we are being asked to do?  It is a key aspect of human nature that we want to show people what we have done and what we feel proud about; we need feedback to know that the tasks we are performing are worthwhile.  We need confirmation that we are valuable, needed and wanted.  If we continue along this path of insisting on pointless and useless tasks which we then ignore or just throw away, then we deserve to feel the way we do.

There are alternatives: the exploratory testing approach can help prevent this waste, since it is based upon doing only what is necessary at the time, using context.  Session-based testing can ensure that feedback on what you are doing becomes a key element of the testing approach.

Let us start to feel that we are important to software development and that testers are a worthy addition to it.

Some useful reading:

Session Based Test Management

What is Exploratory Testing 

Exploratory Testing Resources

Principles of Context Driven



Tuesday, 11 January 2011

The Feedback Loop

One of the critical elements of following the session based test management (http://www.satisfice.com/sbtm/) approach is the use of quick feedback. To achieve this it is suggested that a debrief should be done at the end of each session/day. Jon Bach (http://www.satisfice.com/articles/sbtm.pdf) suggests the use of the PROOF mnemonic:

Past. What happened during the session?
Results. What was achieved during the session?
Obstacles. What got in the way of good testing?
Outlook. What still needs to be done?
Feelings. How does the tester feel about all this?
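The five PROOF headings above could be captured as a simple record in session notes. The sketch below is illustrative only: the class name, fields and example entries are my own, not part of SBTM itself.

```python
# A hedged sketch: recording a session debrief under the five PROOF headings
# (Past, Results, Obstacles, Outlook, Feelings) as a simple data structure.
# The class name and example entries are illustrative, not part of SBTM.

from dataclasses import dataclass, asdict

@dataclass
class ProofDebrief:
    past: str       # What happened during the session?
    results: str    # What was achieved during the session?
    obstacles: str  # What got in the way of good testing?
    outlook: str    # What still needs to be done?
    feelings: str   # How does the tester feel about all this?

    def summary(self) -> str:
        """Render the debrief as plain text, one heading per line."""
        return "\n".join(f"{k.capitalize()}: {v}" for k, v in asdict(self).items())

# Hypothetical example session:
debrief = ProofDebrief(
    past="Explored the checkout flow with new test data.",
    results="Two defects logged; coverage notes updated.",
    obstacles="Test environment was down for an hour.",
    outlook="Retest the fixed defects; explore refunds next.",
    feelings="Frustrated by the environment, but confident in the coverage.",
)
print(debrief.summary())
```

Whatever form the record takes, the point of the sections that follow is that writing it down is not enough: the debrief needs to be talked through with another person, and soon.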

This approach is excellent for communicating what has happened during the testing session(s); however, I keep hearing that people are not doing the debrief. Many reasons are given for this: lack of time or resource, or seeing no benefit, are a few of them. This blog post explains why it is important to carry out these debriefs and to ensure they are done sooner rather than later.

I am looking at this from a psychology viewpoint, to highlight the way our minds work and to keep reminding readers that software testing is a human, sapient process and not an automated box-ticking process.

There are various studies indicating that the longer you take to act upon information, the less you are able to recall that same information at a later date. During EuroSTAR 2010 Graham Freebur stated that unless you act upon the information you had digested at the conference, within 72 hours that information would start to be lost and fade. The crucial part of this is that as humans we are fallible, and many different psychological biases start to play with our minds; unless we can talk about and pass on the information we have as soon as possible, the data we hold is increasingly likely to become clouded.

It is important that we debrief to someone to ensure that any error in our interpretation of the system under test can be corrected. The reasoning behind this is that when we are testing a complex system we make assumptions as we test, and the system may appear to confirm those assumptions, fuelling what could be incorrect interpretations of the system. A computer system will never be able to tell you whether your assumptions are right or wrong; at most it could indicate a bias one way or another. The only way to repair errors in interpretation is to interact with a human being. This is why the debrief is so important: assumptions can be challenged and, if necessary, corrected.

As humans we are very good at adapting and changing our viewpoint and opinion when presented with new information, but to do this effectively it needs to happen in a conversational setting. We are very bad at dealing with delayed feedback, and the longer it is left the more likely we are to keep our initial biases and interpretations.

The point of this rather short blog post is to explain why a debrief after a testing session is important and why it needs to be done as soon as possible. Delays and excuses only allow more assumptions and incorrect information to appear to be the correct answer.

Make the time to debrief; plan for it and use it. It is a crucial element of testing.