
Tuesday, 14 April 2015

Qualitative Research Coding and Software Testing

This article was first published in January 2015 on the Ministry of Testing website - http://www.ministryoftesting.com/2015/01/qualitative-research-coding-software-testing/



Coding is used extensively in the social sciences for qualitative analysis.  The term is not defined in the same way as it is in software development.  The following quote by Saldaña provides a useful definition of coding within the social science world:
A code in qualitative inquiry is most often a word or short phrase that symbolically assigns a summative, salient, essence capturing, and/or evocative attribute for a portion of language-based or visual data  [1]
By “language-based or visual data”, Saldaña means video, audio recordings or written notes.  Once the information has been gathered, the researcher assigns ‘a word or a short phrase’ to represent sections of the data. The researcher then examines and categorises the codes to see what patterns and theories emerge.  These activities together are known as qualitative research coding.  For example, if a researcher were studying the morals of teenagers, the researcher would record conversations and then use coding activities to see what patterns emerge.  Based upon these patterns the researcher would form theories about the morals of teenagers.
Testers using this type of coding may find it helps to label, organise and classify their testing observations.  It provides structure not only to the testers’ observations, but also to the process of observing.  Observation and gathering information are important aspects of testing.
The Changing Minds website describes qualitative coding as:
Coding is an important technique in qualitative research such as anthropology [2], ethnography [3] and other observer and participant-observer methods. [4]
Strauss stressed the importance of coding when carrying out qualitative analysis:
Any researcher who wishes to become proficient at doing qualitative analysis must learn to code well and easily. The excellence of the research rests in large part on the excellence of the coding. [5]
The same can be applied by testers when analysing their testing effort: there is a need, as a tester, to be able to ‘code well and easily’. Testers should examine the testing evidence that has been gathered and use coding to formulate flexible theories and ideas about the behaviour of the software.
There appear to be some parallels between qualitative research coding and exploratory testing [6]; the table below describes some of these comparisons.
Social Science | Exploratory Testing
Evidence based upon immersion in the culture under investigation | Evidence based upon exploring the software under test
Theories formed based upon the evidence gathered during immersion | Theories formed based upon the evidence gathered during exploration
Theories adjusted and altered as new evidence is gathered | Theories adjusted and altered as new evidence is gathered
Further investigations take place to uncover evidence that may support or disprove current theories | Further exploration takes place to uncover evidence that supports or disproves current thinking about the behaviour of the software under test
Coding and classification of evidence to identify patterns | Coding of evidence from exploratory sessions to identify patterns and risks
Table of comparisons between social science research methods and exploratory testing
Testers can utilise social science coding to analyse the testing evidence gathered to help form theories about the behaviour of the software. These theories can be presented to stakeholders to provide valuable information and support decision-making.
When people first start to use social science coding they can find it complex and daunting.  To help simplify coding, Chris Hann created the following diagram:
Coding pyramid [7]
The diagram shows the different levels of coding that social scientists go through to form and reform theories.
Testers can use social science coding not only for test execution, but also for other testing activities such as test planning, testing discussion and test reporting.

Coding in action

The following is an example of using level 1 coding for user stories.
User Story
As a user
I want to login in securely
So that my private information is kept private
A tester using coding might assign the following codes for this user story (each code is separated by the | symbol):
Codes
Security | Login | Operations | Function
The tester then looks at more user stories and codes them:
User Story
As a user
I want to record a currently playing live show
So that I can watch the show at a later time
Codes
Recording | Live TV | Device Remote | Operations | Data | Time | Platform | Functions
User Story
As a user
I want to playback a currently recorded show
So that I can watch the recorded show now
Codes
Recording | Playback | Device Remote | Operations | Time | Data | Structure | Interface
The examples above make use of the SFDPOT [8] heuristic.  This heuristic is a useful way of utilising social science coding to identify testing coverage gaps.  When coding, it is acceptable to assign multiple codes to each piece of information or evidence gathered.
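To illustrate, the coverage-gap idea above can be sketched in a few lines of Python. The user stories, codes and SFDPOT categories are taken from the examples in this article (with ‘Functions’ normalised to ‘Function’); the gap-checking logic is my own illustrative sketch, not an established tool.

```python
from collections import defaultdict

# Hypothetical level 1 codes assigned to the three user stories above.
story_codes = {
    "login securely": ["Security", "Login", "Operations", "Function"],
    "record live show": ["Recording", "Live TV", "Device Remote", "Operations",
                         "Data", "Time", "Platform", "Function"],
    "playback recorded show": ["Recording", "Playback", "Device Remote",
                               "Operations", "Time", "Data", "Structure",
                               "Interface"],
}

# The six SFDPOT categories from Bach's heuristic.
sfdpot = ["Structure", "Function", "Data", "Platform", "Operations", "Time"]

# Map each SFDPOT category to the stories whose codes touch it.
coverage = defaultdict(list)
for story, codes in story_codes.items():
    for category in sfdpot:
        if category in codes:
            coverage[category].append(story)

# Categories with no codes at all are potential coverage gaps.
gaps = [c for c in sfdpot if c not in coverage]
print("Coverage:", dict(coverage))
print("Potential gaps:", gaps)
```

Here all six categories are touched by at least one story, so no gaps are reported; with a larger backlog the same check would flag SFDPOT categories that no story's codes ever mention.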
The process of coding can be defined in the following steps (taken from [9]):
  1. Decide which types of coding are most relevant
  2. Start coding
  3. Create a start list of codes
  4. Generate categories (pattern codes)
  5. Test these categories against new data (start with contrasting data early on!)
  6. Write about categories/pattern codes in a memo to explain their significance
The following is an example of a tester using these steps:
The tester’s initial testing effort showed that a particular API appeared to work correctly with the customer data set used.  The tester coded this as ‘API interface: customer data set ingests’.  The tester formed a theory that the API had been implemented and appeared to work for that data set. The tester then tested this theory using a different customer data set and found the behaviour inconsistent; this led to a change in the tester’s theory based upon the evidence gathered. This is levels 2 and 3 on the coding pyramid diagram.

Memos and questioning

When carrying out coding, testers can use ethnographic research methods [10] by asking themselves the following questions:
  • What is the software doing?
  • What is the software trying to accomplish?
  • How does the software accomplish this?
  • Does the user understand what is being accomplished?
  • What assumptions does the software make about the user?
  • What surprises you about how the software is behaving?
  • What do I see going on here? (To track your assumptions)
  • What did I learn from the notes I have taken?
  • What intrigued me? (To track your positionality [11] – where do you stand, what biases could be at play?)
  • What disturbed me? (To track your tensions, beliefs, attitude)
These questions have been adapted from Sunstein and Chiseri-Strater (2007: 106), FieldWorking: Reading and Writing Research. [12]
Categorisation, level 4 on the coding pyramid, is the concept of tying together a number of observations and seeing if they have any common characteristics. Writing up these characteristics or connections is, in social science coding terms, called memoing.
Glaser defines memoing as:
(A memo is) the theorising write-up of ideas about codes and their relationships as they strike the analyst while coding… it can be a sentence, a paragraph or a few pages… it exhausts the analyst’s momentary ideation based on data with perhaps a little conceptual elaboration (Glaser, 1978: 83) [13]
The following gives a testing example of this:
After testing a User Interface, the tester defines the following codes from the gathered testing evidence:
UI inconsistent | error messages unclear | undermined feature behaviour
The tester memos these codes as ‘unwarranted user behaviour’. Ideally, though, the tester should write more detailed memos than the example given.  The purpose of a memo is to write down your thoughts and reflections on the information you have gathered.
The more extensive a tester’s memos are, the more concrete the theories they can form.  This aspect of coding can prove valuable for testers, since the completed memos can be used as evidence of what the system under test is really doing.
Coding can be used when analysing the evidence gathered during testing; the tester can group together and code similar behaviours and observations to see if there are any patterns.  These patterns can be used by the tester to guide their testing effort and provide focus for areas that they may find useful to explore.
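As a rough sketch of this grouping step, the snippet below counts how often individual codes, and pairs of codes, recur across a set of coded observations. The observations and codes are hypothetical, and co-occurrence counting is just one simple way to surface candidate pattern codes; it is not a method prescribed by the sources cited here.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded observations from several exploratory sessions.
observations = [
    {"note": "timeout on save", "codes": {"API", "Time", "Error handling"}},
    {"note": "slow search results", "codes": {"Search", "Time"}},
    {"note": "save fails for large file", "codes": {"API", "Data", "Error handling"}},
    {"note": "unclear error on login", "codes": {"Login", "Error handling"}},
]

# Frequency of individual codes: which behaviours recur across sessions?
code_counts = Counter(code for obs in observations for code in obs["codes"])

# Co-occurrence of code pairs: candidates for pattern codes
# (level 2 on the coding pyramid).
pair_counts = Counter(
    pair
    for obs in observations
    for pair in combinations(sorted(obs["codes"]), 2)
)

recurring_pairs = [pair for pair, n in pair_counts.items() if n > 1]
print(code_counts.most_common(3))
print(recurring_pairs)
```

In this made-up data set, ‘Error handling’ recurs most often and co-occurs repeatedly with ‘API’, which might prompt the tester to explore error paths around the API more deeply.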
Ian Dey, in his book Qualitative Data Analysis: A User-Friendly Guide for Social Scientists [14], makes the following statement about coding:
The term ‘coding’ has a rather mechanical overtone quite at odds with the conceptual tasks involved in categorising data. This arises from the association of coding with a consistent and complete set of rules governing the assignment of codes to data, thereby eliminating error and of course allowing recovery of the original data simply by reversing the process (i.e. decoding). Qualitative analysis, in contrast, requires the analyst to create or adapt concepts relevant to the data rather than to apply a set of pre-established rules.
Since coding is a conceptual approach, having pre-defined rules or processes before the tester has the evidence is not something that fits easily into a qualitative research approach.  Codes are relevant to the actual evidence that has been gathered rather than based upon assumptions of what that evidence may be.

Summary

The use of qualitative coding in testing provides many benefits that testers can utilise across all testing activities. Some of these benefits include:
  • Pattern recognition
  • Theory forming
  • Critical thinking
  • Systems thinking
  • Testing gap analysis
  • Meaningful reporting
  • Evidence gathering
Learning about social science coding and applying it in practice can be a powerful addition to a tester’s skills portfolio.  Those who promote that testers should learn to code are correct; nevertheless, they may want to explore alternative coding approaches.
For those who wish to learn more about the connection between social science and software testing I recommend the following articles and books as useful starting points:
With special thanks and appreciation for their help and patience in making this a much more complete article than it was to begin with:

References

  1. The Coding Manual for Qualitative Researchers – Saldaña (2013) – http://www.amazon.com/The-Coding-Manual-Qualitative-Researchers/dp/1446247376
  2. What is Anthropology? – American Anthropological Association (website, last accessed Aug 2014) – http://www.aaanet.org/about/whatisanthropology.cfm
  3. “A Simple Introduction to the Practice of Ethnography and Guide to Ethnographic Fieldnotes” – Brian A. Hoey, Marshall University Digital Scholar (2014): 1-10 – http://works.bepress.com/brian_hoey/12
  4. Changing Minds – Ethnographic coding – http://changingminds.org/explanations/research/analysis/ethnographic_coding.htm
  5. Qualitative Analysis for Social Scientists – Anselm L. Strauss (1987, p. 27) – http://www.amazon.com/Qualitative-Analysis-Social-Scientists-Strauss/dp/0521338069
  6. What is Exploratory Testing? – James Bach – http://www.satisfice.com/articles/what_is_et.shtml
  7. Techniques and Tips for Qualitative Researchers – Chris Hann – http://qrtips.com/faq/FAQ–code%20terms.htm
  8. How Models Change – Michael Bolton – http://www.developsense.com/blog/2014/07/how-models-change/
  9. Qualitative codes and coding – Heather Ford (2014) – http://www.slideshare.net/hfordsa/qualitative-codes-and-coding
  10. Are testers ethnographic researchers? – Stevenson (2011) – http://steveo1967.blogspot.com/2011/01/are-testers-ethnographic-researchers.html
  11. What is positionality in practitioner research? – Dissertation Scholar (website, last accessed Aug 2014) – http://dissertationscholar.blogspot.mx/2013/04/what-is-positionality-in-practitioner.html
  12. FieldWorking: Reading and Writing Research – Sunstein and Chiseri-Strater (2007: 106) – http://www.amazon.com/FieldWorking-Reading-Writing-Research-Edition/dp/0312622759
  13. Theoretical Sensitivity: Advances in the Methodology of Grounded Theory – Glaser (1978: 83) – http://www.amazon.com/Theoretical-Sensitivity-Advances-Methodology-Grounded/dp/1884156010
  14. Qualitative Data Analysis: A User-Friendly Guide for Social Scientists – Ian Dey (1993) – http://www.amazon.com/Qualitative-Data-Analysis-Friendly-Scientists/dp/041505852

Monday, 3 December 2012

Ethnographic research feedback

Some time ago I wrote an article about the relationship between ethnographic researchers and testers and how similar they are.  Recently Peter H-L (@Unlicensed2test) on Twitter reminded me that I had also presented at the UNICOM conference on using some aspects of ethnographic research to aid feedback when we are testing, and that from this I came up with a new mnemonic and a set of testing-related social science questions.  I had thought that I had already posted this, but it seems I had not.

What follows is taken from the talk I did.


*************
Within the article there was a section that dealt with the questions the researcher should be asking when studying a subject.  I changed this to make it relate to software testing and came up with the following:


  • Substantive Contribution: "Does the testing carried out contribute to our understanding of the software?"
  • Aesthetic Merit: "Does the software succeed aesthetically?" Is it suitable for the end user?
  • Reflexivity: "How did the author come to write this test…Is there adequate self-awareness and self-exposure for the reader to make judgements about the point of view?"
  • Impact: "Does this affect me? Emotionally? Intellectually?" Does it move me?
  • Expresses a Reality: "Does it seem 'true'—a credible account of a requirement?"


Lynn McKee has been updating a list of testing mnemonics on her blog, so I thought about this and came up with the following mnemonic:

R.A.I.S.E


From this I created a list of questions under each of these headings that can be used to aid feedback when you have been testing, ideally when you are following session-based test management.


Use the following template to do a personal review of the testing that you carried out during the day.
Please try not to answer with a simple yes or no; expand on your reasons for it being yes or no.
This debrief/review is more about your views, opinions and feelings than about the product you have been testing.
It should only take you 10 minutes to complete this feedback – try not to write essays.


_______________

Reflect
Personal reflection:
  • Could you have done things better? If so, what? (Both from a personal and a testing perspective)
  • Have you learnt new things about the product under test (that are not documented)?
  • Has your view of the product changed for better or for worse? Why has your view changed?


‘Epistemological reflexivity’ (What limits did we hit?)
  • Did your defined tests limit the information you could find about the product? (Did you need to explore new areas that you had not defined?)
  • Could your tests have been done differently? If yes, how?
  • Have you run the right tests?
  • If you had done things differently, what do you think you would have found out about the product?
  • What assumptions have you uncovered to be true/false?
  • Did the assumptions you made impede or help your testing?

Aesthetic:
  • In your opinion is the product suitable for the end user?
  • In your opinion is the product appealing at first look?
  • In your opinion is the product confusing?
  • In your opinion does the product flow?
  • In your opinion are there any ugly areas?
  • In your opinion does the product succeed aesthetically? Does it meet the image the customer is trying to portray?

Impact:
(This section is intended to capture how you 'feel' about the product and your first impressions; if you answer yes, you should provide more details.)
  • Does this affect you?
    • Emotionally?
    • Intellectually?
  • Does it move you?
  • Does it cause you negative/positive feelings?
  • Does it frustrate you?
  • Does it annoy you?

Substantial:
  • Have we covered a substantial amount of the key product areas?
  • Has the testing contributed to your understanding of the product?
  • Do you think you have a substantial understanding of the system and subsystems?
  • Does your knowledge of the system have any substantial gaps?
  • Could you easily explain the system to a first time user?

Expression:
  • Does the product seem 'true'—a credible account of a requirement?
  • Does the product express what will happen in the ‘real’ world?
  • Does the reality of the product match the expectations of the product?
  • Does the product express unexpected ways of working?

_______________
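As a minimal sketch, the template above could also be generated programmatically, which is handy if you want to paste a fresh copy into each session report. The question subset below is abridged from the full list above, and the function name is my own invention, not part of any tool.

```python
# Hypothetical sketch: render an abridged R.A.I.S.E. debrief as plain text
# that a tester can fill in after a testing session.
raise_questions = {
    "Reflect": [
        "Could you have done things better? If so, what?",
        "Have you learnt new things about the product under test?",
    ],
    "Aesthetic": [
        "In your opinion, is the product suitable for the end user?",
        "In your opinion, does the product succeed aesthetically?",
    ],
    "Impact": [
        "Does this affect you emotionally or intellectually?",
        "Does it frustrate or annoy you?",
    ],
    "Substantial": [
        "Have we covered a substantial amount of the key product areas?",
        "Could you easily explain the system to a first-time user?",
    ],
    "Expression": [
        "Does the product seem 'true', a credible account of a requirement?",
        "Does the reality of the product match the expectations of the product?",
    ],
}

def render_debrief(questions):
    """Return the debrief as plain text, one heading per R.A.I.S.E. letter."""
    lines = []
    for heading, qs in questions.items():
        lines.append(heading + ":")
        lines.extend("  - " + q for q in qs)
        lines.append("")  # blank line between sections
    return "\n".join(lines)

print(render_debrief(raise_questions))
```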


To make it easier I have created an MS Word document with the questions in it, which you can download from Google Docs here.

*************



Thursday, 31 March 2011

An update or two.

I noticed that I have not written a blog article in a while, so I thought I would put together a short piece on what I have been up to, so that regular readers can be sure that I am still alive and well.

On the personal front we have had a few health scares over the past month hence my lack of tweeting or blogging.

On the work front I have been very busy and involved in a few different and exciting projects while continuing to look at different ways in which we can improve.

During this period I have been looking more and more into ethnographic research and its connection to testing. I find this area of social science fascinating, as is how closely it appears to correlate with testing. Since there does appear to be a connection, I am currently running a couple of internal case studies based upon methods from ethnographic research, as mentioned by Richardson in their article for Qualitative Inquiry: Evaluating Ethnography.

The findings from this case study will be presented at the UNICOM Next Generation Testing conference on 18/19 May 2011.

If you cannot make this event, I do intend to give a very basic/quick introduction to this approach at the Software Testing Club meet-up in Oxford on the 14th April 2011. This event will be used as a world premiere for the approach I have been working on, so that definitely makes it worth attending. Or the fact that Lisa Crispin and Rob Lambert will be there should tick everyone’s box.

Without giving away too much detail before the meet-up, here is a brief summary of the approach I have been investigating:

  • The concept is based upon questioning the tester as much as you question the product being tested.
  • It is a check-list that can be used on an individual basis and should take between 5 and 10 minutes. The idea is to look at what you are doing, check it is the right thing, and see if you are missing anything.
  • I will be giving away the check-list on the evening of the meet-up. (Wow, a freebie!)

Have I given away too much information, not enough or left you wanting more?

If you want to know more then I suggest you sign up to attend the meet-up or the UNICOM conference.