Monday, 19 July 2010

The Emotional Tester (Part 2)

The first part of this blog looked at how our emotions can affect how we test. This second part looks at how we might capture our feelings while testing, and whether doing so could provide any useful information about the product we are testing. Could it prove to be a useful oracle when testing?

On Twitter, @testsidestory said the following:

That is done regularly in usability labs: capture emotions and facial expressions of the users as they use the s/w

This was in response to a question that I posted on Twitter:

…. - what I am thinking is that we need to capture our mood when #testing it could indicate a problem in the s/w…

The concern with this is that it would be very expensive to implement for the majority of people, so I thought about how we could implement a system that captures emotional state while remaining effective and inexpensive.

One idea I had was to use a concept from the book Blink by Malcolm Gladwell, in which he talks about how important our initial emotion/reaction is when we first encounter something. He discusses how often our 'gut reaction' proves to be correct, using the example of a statue that a gallery bought after scientific experts who had examined it declared it genuine. A couple of art experts who saw the statue in private viewings, before it was unveiled, had a 'feeling' that there was something wrong about it; their initial gut reaction told them it was a fake. Several months later it was discovered to be a fake.

The above is a concise retelling of the story from the book, but why did the scientific experts get it so wrong? Could it be that confirmation bias played a part? The experts wanted so much to believe the statue was genuine that they biased their results, or overlooked obvious facts pointing to it being a fake. I think confirmation bias is a great subject and one I will look at from a testing perspective sometime in the future.

  • So can we use this ‘gut reaction’ concept in testing?
  • Would it be of any value?

I should state that I have not tried any of the following ideas, and if anyone would like to volunteer to trial them within their organization I would be most interested. Due to circumstances I currently do not have the ability to try this out on a large scale.

The first problem we face is how we capture our initial reaction to what we are testing. The requirements for this are that it is:

  • Easy to capture
  • Simple
  • Quick

My thought is to use different smileys, which are simple and quick to create and capture, thus covering all the requirements.

My idea would be to use three different smileys:

  • Happy
  • Neutral
  • Unhappy

Why use smiley’s?

The reasoning is that anyone can draw a smiley, no matter how artistic they are, and from a measurement perspective it is very easy to recognize and spot patterns when using such well-known symbols. The other, longer-term thought was that the set is easy to extend, adding sad, angry, or extremely happy smileys if you wish to widen the range of emotions and feelings captured.

Capturing the initial feeling/emotion.

If you are working in an environment where you carry out exploratory testing following mission statements (session-based testing), then this is very simple to implement. The idea is that when a tester starts their mission (session), they should, within the first couple of minutes (five at most), record their emotion/feeling about the software using one of the smileys.

If this were done for every session, and captured in such a way that it is easy to see at a glance which areas (test charters) testers are unhappy with, it could provide some useful information.
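Purely as an illustration (the tooling, names, and sample data below are my own assumptions, not part of the approach itself), a minimal Python sketch of recording a start-of-session smiley per session and summarizing the results per test charter might look like this:

```python
from collections import Counter, defaultdict

# The three smileys proposed above; the text labels are illustrative.
MOODS = ("happy", "neutral", "unhappy")

# Hypothetical session log: (test charter, tester, mood at start of session).
sessions = [
    ("checkout", "alice", "unhappy"),
    ("checkout", "bob", "unhappy"),
    ("search", "alice", "happy"),
    ("search", "carol", "neutral"),
]

def moods_by_charter(session_log):
    """Tally the start-of-session moods recorded against each test charter."""
    tally = defaultdict(Counter)
    for charter, _tester, mood in session_log:
        if mood not in MOODS:
            raise ValueError(f"unknown mood: {mood}")
        tally[charter][mood] += 1
    return tally

# At-a-glance view: which charters are testers unhappy with?
for charter, counts in moods_by_charter(sessions).items():
    print(charter, dict(counts))
```

Anything from a spreadsheet to a whiteboard would do equally well; the point is only that the capture stays easy, simple, and quick.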

So you now have a whole set of data regarding the testers' initial feelings about the software they are testing. What does this information tell you?

For example, if a certain test focus area shows that all the testers are unhappy in that area, would this indicate a problem? I feel it could indicate something wrong there, but you would need to talk to the testers and gather more information (obtain context). The great thing about capturing initial feelings towards the software is that it could help the development teams focus on areas where there may be implied problems, based upon that initial feeling.

This approach could be taken a step further by getting the testers to add another smiley when they have finished the session, to capture how they feel about the software afterwards. You now have two sets of data and can compare the two for discrepancies, as in the sketch below.
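Again only as a sketch (the scoring scheme and sample data are my own assumptions), the start-of-session and end-of-session smileys could be compared to flag sessions where the two feelings diverge:

```python
# Map each smiley to a score so that a shift in feeling has a direction.
MOOD_SCORE = {"unhappy": -1, "neutral": 0, "happy": 1}

# Hypothetical log extended with an end-of-session mood:
# (test charter, tester, mood at start, mood at end).
sessions = [
    ("checkout", "alice", "happy", "unhappy"),
    ("checkout", "bob", "happy", "unhappy"),
    ("search", "carol", "unhappy", "unhappy"),
]

def flag_discrepancies(session_log):
    """Yield sessions whose start and end moods differ."""
    for charter, tester, start, end in session_log:
        shift = MOOD_SCORE[end] - MOOD_SCORE[start]
        if shift != 0:
            yield charter, tester, start, end, shift

for charter, tester, start, end, shift in flag_discrepancies(sessions):
    trend = "worsened" if shift < 0 else "improved"
    print(f"{charter}/{tester}: {start} -> {end} ({trend})")
```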

What would you think if the majority of testers were happy about a certain test focus area at the start, but unhappy by the end of their sessions?

Does this indicate a problem?

Or what if it was the opposite: mostly unhappy at the start, but happy by the end of the session?

And if they were unhappy both at the beginning and at the end, so their gut reaction proved correct, does this indicate that there are some major issues within that area?

Could this indicate frustration with the system, or maybe a lack of knowledge?

In my opinion this approach could prove to be a very useful oracle for the quality of the software.

What do you think?

Could this prove to be useful?

I would love some feedback on this idea - good or bad.

3 comments:

  1. Hi John,

    Interesting post, just like part 1!

    Your question, "why did the scientific experts get it so wrong?" Hmm.

    I don't know that the scientific experts used any scientific methods, or used them correctly, or used them and made a wrong interpretation/conclusion.... Gladwell doesn't say either. So I'm a bit wary of the expert label. Not too fond of the scientific label either. These only give ideas about how people are "perceived" to work/operate and not how they really did operate.

    If I said some piece of software was tested by a professional software tester - would that tell you anything(?), without knowing more about (1) the person & reputation, (2) the methods they normally used (part of 1), (3) the methods actually used, (4) the assumptions and/or restrictions in place at time of testing, (5) anything else (silent evidence) not explicitly covered by 1-4? (I think Gladwell didn't expand on these considerations - although I haven't got the book in front of me to check :( )


    Smileys and sessions - yes, I like this.

    One thing though: smileys will relate to the system and the mission, I guess. Testers may work in the same part of the system with different missions (or so I assert).

    I think James Bach used to use the thumbs up/down/neutral signal once upon a time. So I see no reason why a "short-hand" approach to instant feedback can't work - I use it when walking past colleagues and asking, "all ok?" - the frown, focus, smile or swearing gives me quick feedback.

    But, I like the smiley on a report - it's a quick signal that can be recognised!

    I'd be really interested if anyone tries it and produces an experience report.

  2. Thank you for the comments Simon.

    Some thought provoking stuff.

    Just to clarify - Gladwell did mention that a geologist ran two days of tests on the statue with a stereomicroscope and other equipment, so my initial thought was that this was a scientific expert. You could be right that they did the wrong tests, or that they saw what they wanted to see - back to confirmation bias again!

    I am not sure about your second point - if someone had tested a piece of software and they class themselves (or, even better, their peers class them) as a professional tester, I would have some confidence that the software would do the job it was intended to do. Otherwise we would end up with a paradoxical situation: if no one believed that software could be tested to a sufficient degree of confidence, then either there would be no testers or everyone would have to test the software for themselves. When you buy a piece of software, do you know any of the four points you made about the people who tested it? I think this comes down to the perception of the company, and trust.

    Your assertion about people exploring the same part of the system is correct. A test charter defines the test focus areas and within each test focus area there can be multiple missions.

    I am still looking for volunteers in a middle-sized team to try this out for me :o)

    Once again Simon I appreciate the comments especially when they are challenging and make me think.

  3. Hi John,
    My comment (second point) was about labels - a little philosophical - I was making the point that saying a product had been tested by a tester (or even a professional software tester) is not necessarily enough - you need knowledge about the person/company, their usual methods and reporting - the label in itself doesn't say anything.

    If you had two unknown testers in front of you for an interview and one said he was a software tester and the other said she was a professional software tester - I suspect you would assume very little about the person's abilities from the title - you'd want to know more about them (their story, experience and attitude) before attaching any meaning to the labels in their cases.

    Yes, it would be good if the software tester label gave a certain expectation or perception of what sort of service you might expect - but I'm not sure the testing world is there yet.

    Would the "certified" label help? Not necessarilly (in my book)...

    (end of philosophical safari)
