Thursday, 31 March 2011

An update or two.

I noticed that I have not written a blog article in a while, so I thought I would put together a short piece on what I have been up to, so that regular readers can be sure that I am still alive and well.

On the personal front we have had a few health scares over the past month, hence my lack of tweeting and blogging.

On the work front I have been very busy and involved in a few different and exciting projects while continuing to look at different ways in which we can improve.

During this period I have been looking more and more into ethnographic research and its connection to testing. I find this area of social science fascinating, and it appears to correlate closely with testing. Since there does appear to be a connection, I am currently running a couple of internal case studies based upon methods from ethnographic research, as described by Richardson in the Qualitative Inquiry article 'Evaluating Ethnography'.

The findings from these case studies will be presented at the UNICOM Next Generation Testing conference on 18/19 May 2011.

If you cannot make that event, I do intend to give a very basic, quick introduction to this approach at the Software Testing Club meet-up in Oxford on the 14th April 2011. The meet-up will serve as the world premiere of the approach I have been working on, which definitely makes it worth attending. Failing that, the fact that Lisa Crispin and Rob Lambert will be there should tick everyone’s box.

Without giving away too much detail before the meet-up, here is a brief summary of the approach I have been investigating:

  • The concept is based upon questioning the tester as much as you question the product being tested.
  • It is a check-list that can be used on an individual basis and should take between 5 and 10 minutes. The idea is to look at what you are doing, check that it is the right thing, and see if you are missing anything.
  • I will be giving away the check-list on the evening of the meet-up. (Wow, a freebie!)

Have I given away too much information, not enough or left you wanting more?

If you want to know more, then I suggest you sign up to attend the meet-up or the UNICOM conference.

Thursday, 10 March 2011

Is Context Driven Testing a gimmick?

My inspiration for this article was a comment I received with regard to trying to organise an internal Rapid Software Testing course.

Someone commented that they felt this ‘smacks of a gimmick’, but said they would be interested in finding out what people discover or get out of it in relation to the way we work.

The views expressed in this article are solely my own and are based upon my own experiences and knowledge of the testing profession.

Currently within the testing world there appear to be two schools of thought:

The traditional (standards) approach – driven by the ISTQB examination board (formerly the ISEB)

And there is the

The Context Driven Testing concept – driven by such people as Cem Kaner, James Bach and Michael Bolton.

This article is not about entering a debate to say one approach is better than the other; in my opinion a good balance is a mixture of the various approaches. There are many more approaches and concepts to testing than the two mentioned above, such as the Agile and analytical approaches, but the main discussions within the testing community appear mainly to refer to the two 'schools' listed above.

The principle of context-driven testing is that the emphasis is on thinking, experiencing and doing rather than assuming and interpreting what people believe the system should do. It is more ‘hands-on’: you learn about the system as you test.

A common misconception is that there is no planning involved within rapid software testing, and that it is ‘free’ testing without structure or discipline. In my experience, and from using the material from the Rapid Software Testing course, there is more planning, structure and focus than with any other approach I have used. The introduction of session-based testing, in which testers have a mission and a goal to aim for during their testing session, ensures that testing remains focused and on track.
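To give a flavour of what this looks like in practice, here is a sketch of a session charter; the wording and details are my own invention, not taken from the RST course materials:

CHARTER: Explore the account-settings screens with malformed input to discover how errors are reported to the user.
TIME BOX: 90 minutes
TESTER: (name)
NOTES: Record anything that raises the question "Is there a problem here?"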

The difference between the two approaches is that the standards approach is mainly used to define testing (test cases) before you actually have access to the system under test. There is an inherent weakness here: it assumes that the requirements, design and user needs are correct, accurate and complete.

Once actual testing has started, the majority of testers revert to working in a context-driven way: they adapt the scripts they have written, think of new ones and decide which not to run. The context-driven approach is to do some lightweight up-front planning which is deliberately loose, allowing testers the freedom to use their own logic and adapt as they learn more about the system. The tester builds up knowledge of the system while executing, creating new tests and recording what they are testing; this is the basic definition of exploratory testing. Time is not wasted creating tests that will never be run, or maintaining test scripts with incorrect steps. It is about recording what is happening at the time testing happens and storing the results of that session. Where possible, a test can then be automated and never run manually again.

Another difference is that rapid software testing requires testers to think as they test, not just tick boxes. It forces the tester to question what they see and allows them to freely explore and discover new things about the system. It uses triggers (heuristics) to keep asking the question "Is there a problem here?"

It does this by comparing the product with similar products, looking at the history of the product, or looking at the claims made about the product. These are tests with no simple yes or no answer; it depends on the context and the thinking of the tester.

Is context driven testing a gimmick?

IMO it is not.

It is a natural way to test products, and it is the way testing has been done since it became a career choice. However, people have not admitted (or will not admit) to following this approach, or are not aware that they are doing so.

The whole concept of rapid software testing is to give this way of working a name, and to provide useful skills and tools to improve a methodology that follows context-driven thinking.

_______________________________________________________

FOOTNOTE:

After a lively discussion on twitter with James Bach I feel I need to clarify some misuse of definitions.

James gave a great description to show the difference between Rapid Software Testing and Context-Driven Testing:

Rapid Testing is a testing methodology that is context-driven.
But context-driven testing is not Rapid Testing.

After this 'revelation' I have made some minor changes to the original post.


Monday, 21 February 2011

Measuring Testing

I saw a couple of tweets by @Lynn_Mckee recently on the metrics that are used in testing.

There are many great papers on #metrics. Doug Hoffman's "Darker Side of Metrics" provides insight on behavior. http://bit.ly/gKPHcj #testing

Ack! So many more that are painful... Scary to read recent papers citing same bad premises as papers from 10 - 15 yrs ago. #testing #metrics

And it made me think about how we measure testing.

This article is not going to tell you

'This is how you should measure testing’

or

offer any ‘best practice’ ways of measuring

My concern with any of the ways in which we measure is that it is often done without context, or without connection to the question you wish the numbers to answer. The result is a set of numbers devoid of any information about their ‘real’ meaning. There are many and varied debates within the software testing field about what should and should not be measured. My take on all of this is:

Can I provide useful and meaningful information with the metrics I track?

I still measure the number of test cases that pass and fail, and the number of defects found and fixed.

Is this so wrong?

If I presented these numbers on their own, without any supporting evidence and a story about the state of testing, then yes, it would be wrong, and it could be very dangerous.

I view the metrics gathered during testing as an indication that something might be correct or wrong, working or not working. I do not know which just from the metrics; that comes from talking to the team, debriefing and discussing issues.

I capture metrics on requirement coverage, focus-area coverage, percentage of time spent testing, defect reporting and system setup. So I have a lot of numbers to work with, which on their own can be misleading, confusing and easily misinterpreted. It is only when I investigate the figures in detail and look for patterns that I notice missing requirements, conflicting requirements and whatever is stopping me executing testing.
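To illustrate how the same headline figure can hide very different situations, here is a small Python sketch; all the numbers are invented:

# Two hypothetical test runs with identical headline pass rates.
run_a = {"passed": 90, "failed": 10, "blocked_requirements": 0}
run_b = {"passed": 90, "failed": 10, "blocked_requirements": 7}

for name, run in (("A", run_a), ("B", run_b)):
    rate = run["passed"] / (run["passed"] + run["failed"])
    print(f"Run {name}: {rate:.0%} pass rate, "
          f"{run['blocked_requirements']} requirements never tested")

# Both runs report "90% passing"; only the story around the numbers
# reveals that run B could not touch seven requirements at all.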

So what is this brief article saying?

Within the software testing community I see that we get hung up on metrics and how we measure testing, and I feel we need to take a step back.

It is not so important what you measure as how you use and present the measurements you have captured. It is the stories that go with the metrics that are important, not the numbers.


Wednesday, 2 February 2011

What you believe might not be true. (Part 2)

The first part of this article looked at conjunction bias and framing bias, and how they can steer our thinking towards incorrect assumptions, all under the heading of cognitive bias.

This part investigates other forms of bias and how they influence our decisions and thought processes. The first I will touch upon is belief bias.

Belief bias has many similarities to confirmation bias, and in some ways the two are closely linked. If someone has very strong beliefs, they may argue in such a way that only evidence supporting those beliefs is used, confirming the beliefs in the process. There are many examples of this in the world, from belief in the existence of aliens to the range of conspiracy theories that abound on the internet.

So what is belief bias?

People will tend to accept any and all conclusions that fit in with their systems of belief, without challenge or any deep consideration of what they are actually agreeing with.


Belief bias is the conflict a person experiences when their beliefs do not match the logic of what is presented to them.

The danger with belief bias is that it can quickly turn to belief projection:

Psychological projection is a form of defence mechanism in which someone attributes thoughts, feelings, and ideas which are perceived as undesirable to someone else.

The problem now is that one person's beliefs could be projected onto other team members, even if those beliefs are unfounded. Within software development we all have our own views and beliefs about what a piece of software is expected to do.

How does this have an impact on software development, and especially on testing? Imagine a situation in which a tester has a very firm belief about how an interface should interact. They test that interface and find it is not behaving as they believe it should. A bug report is raised and passed back to the development team. It is then found that the bug was raised in error and that the interface interacts as designed and described in the requirements. This is a simple case in which, regardless of what the requirements, design specifications and other people say, the tester's strong belief bias insists that everyone else is wrong and that what they believe is correct.

In the world in which we as testers operate, I doubt the above would happen often, since developers and testers now communicate and there is no more 'throw it over the wall' way of releasing. However, if you still work in teams where there is a lack of communication, belief bias can have a large negative effect on testing.

Another issue arises when you do work in a team and belief projection comes into the equation. If someone on your team subconsciously believes that the developers think the testers are a waste of space and unnecessary (a negative personality trait), they could project this onto other members of the team and start to build a barrier of resentment between the teams. It is impossible to prevent people having opinions and thoughts about other members of a team, but an environment in which everyone is allowed to express their views in open discussion can help to remove this type of bias. At one company where I worked as a team lead, I would hold an open session in which nothing was recorded or written down, but people could express views on what was really happening within the project. Sometimes it would be heated and people would get emotional, but it managed to clear the air. One important part of this method was that a mediator was always in charge, to prevent it descending into name-calling.

Another bias which could have an impact on testing is illusory correlation, in which people form a connection between two events even when all the evidence shows there is no such connection or relationship. A good example is people with arthritis who believe that their condition worsens depending on the weather. Redelmeier and Tversky conducted an experiment in which they took measurements based upon patients' views of their condition and, at the same time, recorded detailed meteorological data. Even though nearly all the patients believed that their condition got worse during bad weather, the actual results showed a near-zero correlation between the two.
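To show what a near-zero correlation looks like, here is a small Python sketch. The pain scores and pressure readings are invented and generated independently of each other, so any relationship we think we see between them is, by construction, illusory:

import random

random.seed(42)

# Invented data: daily pain score (1-10) and barometric pressure (hPa),
# generated independently of each other.
pain = [random.uniform(1, 10) for _ in range(365)]
pressure = [random.uniform(980, 1040) for _ in range(365)]

def pearson(xs, ys):
    # Pearson correlation coefficient: covariance over product of spreads.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"correlation: {pearson(pain, pressure):+.3f}")  # hovers near zero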

Wikipedia defines illusory correlation as:

Illusory correlation is the phenomenon of seeing the relationship one expects in a set of data even when no such relationship exists.


It is easy to see the effect this can have on software development. Imagine a developer creating an illusory correlation between two variables that have no real correlation, thereby introducing bugs into the project. There has been a study on the reasons for software errors, and it found that illusory correlation does play a part. Details of this study can be found here.

Stereotypes are normally formed by the use of illusory correlation. Someone who came from a small town where everyone was kind assumes that everyone from a small town is kind; when they go out into the world and meet a kind person, they conclude that the person must be from a small town, even if the correlation is not true or does not make any sense.

How does this help or hinder us with regard to software testing?

The problem occurs when testers work in isolation, form their own methods and create their own hypotheses about what should happen when they test the product under certain conditions. The danger is that the testing becomes one-sided, searching for evidence that matches their current hypothesis of how the product should react. The result of this bias is that conditions which fit the tester's illusory correlation get tested, while conditions that do not meet the expected assumptions do not. Significant bugs could then be missed because the flows that do not fit the expected correlation are never exercised.

It is very difficult to avoid falling into the illusory correlation trap, since the human mind tries to take the easiest path and groups objects together for easier recall; hence the existence of stereotypes. To help avoid this cognitive bias it is again important not to work in isolation, and to involve others in your planning for testing (kick-offs), the execution of your testing (pairing) and the results of your testing (debriefs).

There are many other biases that I have yet to touch upon, and some I might save for future articles, including one or two that could have a positive effect when it comes to testing.

In the meantime, while you wait for my next post, @Qualityfrog tweeted a link to a whole bunch of fallacies and their meanings here:


That should keep you occupied for a while.

I wonder how many of these fallacies affect your day to day testing?

On a positive note: since developers will also suffer from these fallacies when coding, there will always be a need for testers...



Wednesday, 26 January 2011

What you believe might not be true. (Part 1)

When I started to look at how the human mind works and the traps it continually falls into, I did not realise what a huge area of psychology this is. The subject of bias and the human mind is fascinating, and every tester should be aware that every decision we make when testing a product is subject to our cognitive biases.

I have previously touched upon how bias can affect our judgement when I wrote the blog post about confirmation bias and cognitive dissonance. We need to be aware that what we think could be wrong and subject to our own biases. There are ways to try to reduce cognitive biases, such as pairing and debriefing, but they are not the subject of this post. The purpose of this post is to look at some of the common cognitive biases in relation to their effect on testing.

I shall start by defining the term cognitive bias:

A cognitive bias is a mistake in reasoning, evaluating, remembering, or other cognitive process, often occurring as a result of holding onto one's preferences and beliefs regardless of contrary information.


Within the field of cognitive bias there are many different types, some of which I have previously discussed. In this article I will look at a few more which could have an effect on our testing. The whole area is huge; I could write many more posts on different types of bias, and I may return to it at some point. Since it is such a large area I will not go into great detail about each type, but will give enough information for readers to be aware of the failings of our human minds.

One bias that intrigues me is called the conjunction effect.

A definition of this bias is described below:

When two events can occur separately or together, the conjunction, where they overlap, cannot be more likely than the likelihood of either of the two individual events. However, people forget this and ascribe a higher likelihood to combination events, erroneously associating quantity of events with quantity of probability.


An example of this can be seen when using the experiment that Amos Tversky and Daniel Kahneman carried out:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

A) Linda is a bank teller.
B) Linda is a bank teller and is active in the feminist movement.

In the experiment 86% of people answered B, even though it can be proven mathematically that A is more probable: the probability of two events occurring together can never exceed the probability of either one alone. This is the conjunction fallacy in action: your mind tricks you into believing something is more probable than it is.
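The arithmetic behind this is worth spelling out. Here is a small Python sketch with invented probabilities (the 0.05 and 0.80 are purely illustrative, not taken from the study):

# P(A and B) = P(A) * P(B|A), and P(B|A) <= 1,
# so the conjunction can never exceed P(A) alone.
p_teller = 0.05                 # invented: P(Linda is a bank teller)
p_feminist_given_teller = 0.80  # invented: P(feminist | bank teller)

p_both = p_teller * p_feminist_given_teller

print(f"P(teller)              = {p_teller:.2f}")
print(f"P(teller and feminist) = {p_both:.2f}")  # always <= P(teller)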

In another experiment, from Tversky and Kahneman in 1983, two different experimental groups were asked to rate the probability of two different statements, each group seeing only one statement:

  • A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.
  • A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.

Even though the probability of either happening was low, significantly more people rated the second statement as the more likely.

The moral? Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable.

How does this all relate to testing?

Within testing we have to look at statements and judge what is most probable to happen.

For example, we see the following requirements:

Req1: If the user is a software engineer then screen ‘is engineer’ must be shown.

Req2: If the user is a software engineer and likes to listen to classical music then screen ‘music engineer’ must be shown.

Now let us look at the following user story:

David went to university to train as a classical violinist. After leaving university, David retrained as a software engineer and started to write code for a major software house.

Which is more likely to be true?

Most people will see Req2 as the more likely to be true, and ignore Req1. However, given the user story, Req2 is actually less probable than Req1.

With these two requirements you can see a conjunction effect, in which one statement appears more likely than the other, and as testers we should be aware of this. Our minds might not notice that there is a conjunction; our bias takes over, so when we test we assume that only Req2 is valid and ignore Req1. Yet every user who satisfies Req2 (a software engineer who likes classical music) also satisfies Req1 (a software engineer), so Req2 can never be the more probable requirement. We need to be aware of this bias and try to eliminate it. This is why context is important: we need to apply context to the situation and to the test to ensure it is valid and correct and that nothing is being missed.

The issue we face as testers is that some requirements, and the resulting test ideas we come up with, could be subject to the conjunction effect, where our minds tell us that the probability of event x is greater than the probability of event y, even when the mathematics says the probabilities are equal, or that event x actually has the lower probability.

How can we prevent this bias?

Experiments that have tried to repeat Tversky and Kahneman's bank-teller fallacy noted that if people were allowed to discuss and communicate their thoughts with others before making their decision, the occurrence of the conjunction fallacy was significantly lower.


This indicates that it is possible to reduce the chance of the conjunction fallacy occurring simply by communicating and talking with other people.

When researching this area for the blog I found that there are a lot of links to cognitive framing:


The problem is that the way something is worded (framed) can leave people subject to the conjunction fallacy. Maybe this is something that architects and technical authors need to be aware of when creating design and requirement documents. How a requirement is framed could influence a developer to write code in a certain way and fall for the conjunction fallacy, assigning more probability to an event or requirement than is actually warranted; developers, please be aware of this.

What fascinates me about this particular area is the idea that the way things are worded (framing) and how our minds understand those words (the conjunction fallacy) could be the reason that a lot of bugs are created in code. I would love to gather data on this and see if there is some correlation.

So let us look at framing and how it can influence our thought processes and cause bias. The problem with framing is that it can be subjective: if we frame a sentence so as to push people towards believing that a certain fact is true, there will still be some people on whom it has no influence. Framing is used a great deal within politics and advertising to encourage people to believe in a certain policy or product; it can be a very powerful tool. Michael Bolton ran a workshop at EuroSTAR 2010 on test framing, which was very useful in helping a tester think about whether a test needs to be run, or why it was not run. It was only after EuroSTAR that it occurred to me that framing can be used in the opposite direction and become dangerous as a tool: it could be used to force people to think that the wrong view is the correct one. An example would be a test plan listing thousands of test cases, framed in such a way that to the casual reader every single test case must be run and is of high value, before the tester has actually touched the product to be tested. The framing is used to justify wasted effort and work.

For example look at the following statement:

I prefer to code using Java rather than C++ because it is easier with the development suite I have installed, and C++ does not work within the development environment I have set up.

The framing bias here is that the person presents Java as superior to C++ because of how they have set up their environment. It may be that using C++ would make the project they are working on easier, but they are giving reasons to justify working in Java instead of C++.

There are many more examples in the world of software development: I prefer OS x to OS y because of z; machine type ‘a’ is far better than machine type ‘b’ because it does ‘xyz’. It does not matter that the person making the statement has never used OS y or machine type ‘b’; they are forming a biased viewpoint and using framing to justify it. IMO this is why the context-driven way of testing is so important: when looking at requirements and statements, it helps to ask ‘in which context?’ to verify that the requirement is just and sound.

As testers we need to be aware of this, and when we report our findings during testing, be mindful of how we frame our words.

Part 2 of this article will look at belief projection and how it may hinder our testing efforts.

Tuesday, 18 January 2011

Are testers ethnographic researchers?

People who follow me on twitter or via this blog might be aware that I have a wide range of interests outside my normal testing job. I like to research and learn different things, especially psychology, and see if they might benefit and improve my skills and approaches in my testing work. One area I have been looking at for a while is the social science of ethnography. The approaches used when carrying out ethnographic research appear to have many similarities to software testing, and I feel we could benefit, and maybe improve our testing skills, by examining it.

IMO there are two areas in which we can learn from ethnography:

  • To improve our understanding of users and how they differ by using ethnographic methods
  • Use ethnographic methods to test software in an exploratory way.

I should start by explaining what my understanding of ethnography is:

Wikipedia attempts to define it here:

http://en.wikipedia.org/wiki/Ethnography

The free dictionary attempts to give a definition here:

http://www.thefreedictionary.com/ethnography

A better definition can be found here:

http://www.brianhoey.com/General%20Site/general_defn-ethnography.htm

The problem with trying to describe and define ethnography is that it has wide and varied meanings.

To me, it is a branch of the study of humanity (anthropology) in which the researcher actively gets involved and participates with the study group rather than just sitting back and observing. The reporting is done using qualitative (words) measurements rather than relying on quantitative (numbers) measurements.

One of the key factors in ethnographic research is that participation, rather than just observation, is central to the approach. Does this not sound familiar to testing, especially exploratory testing? Actively using the software under test to find out about its characteristics and behaviour is similar to an ethnographic researcher living within a community and participating with it to learn about its beliefs and characteristics. There appear to be very close parallels between ethnographic research and exploratory testing. Wikipedia states:

One of the most common methods for collecting data in an ethnographic study is direct, first-hand observation of daily participation.

How similar is that to testing software?

Another approach within ethnography is the use of grounded theory to explain the results of the participation: the theory is built from the data gathered. This is different from grand theory, in which the theory is defined without the use of real-life examples and therefore risks not fitting the actual data gathered afterwards. (Is this similar to scripted versus exploratory testing, grand theory versus grounded theory?)

Grounded theory is a constantly evolving set of conclusions that can continue indefinitely, based upon the changing data obtained by the ethnographic researcher. One of the questions asked about ethnographic research is:

When does this process end?

One answer is: never! Clearly, the process described above could continue indefinitely. Grounded theory doesn't have a clearly demarcated point for ending a study. Essentially, the project ends when the researcher decides to quit. (http://www.socialresearchmethods.net/kb/qualapp.php)

How similar is this to testing?

When do we stop testing?

Many articles have been written on this subject; mainly, we stop when we can learn nothing new, or when we run out of time or money. See this article by Michael Bolton for more information.

I feel that ethnographic research stops because of similar reasons.

One interesting section I saw within the wiki article was about the process of ethnographic research, in which, to aid the researcher, the work is split into areas and the researcher asks questions of each:

  1. Substantive Contribution: "Does the piece contribute to our understanding of social-life?"
  2. Aesthetic Merit: "Does this piece succeed aesthetically?"
  3. Reflexivity: "How did the author come to write this text…Is there adequate self-awareness and self-exposure for the reader to make judgements about the point of view?"
  4. Impact: "Does this affect me? Emotionally? Intellectually?" Does it move me?
  5. Expresses a Reality: "Does it seem 'true'—a credible account of a cultural, social, individual, or communal sense of the 'real'?"

I thought about this and started to change the context to be about software testing:

  1. Substantive Contribution: "Does the testing carried out contribute to our understanding of the software?"
  2. Aesthetic Merit: "Does the software succeed aesthetically?" Is it suitable for the end user?
  3. Reflexivity: "How did the author come to write this test…Is there adequate self-awareness and self-exposure for the reader to make judgements about the point of view?"
  4. Impact: "Does this affect me? Emotionally? Intellectually?" Does it move me?
  5. Expresses a Reality: "Does it seem 'true'—a credible account of a requirement'?"

By doing this I found I suddenly had a set of heuristics against which to measure the software testing that had been carried out: yet again, more similarities between the two crafts.

Another area in which ethnographic research can be useful to software testing is when you need to test software that has a lot of UI interaction. Using the methods of ethnography, a tester could visit the users, observe and participate in their daily routine, and find out the common tasks carried out and what oddities occur. The oddities are the things of greatest interest, since they are the things that would not normally be planned for and that, without active participation with the users, would normally not be uncovered until it is too late.

There are many studies being carried out to determine whether ethnographic research should be used when designing software systems. My concern is that this appears to be stuck in the design-up-front way of working, which is not a flexible, iterative approach. In my view it is easier, quicker and cheaper to ensure that testers use ethnographic methods when testing, to check that the design is suitable for users; or, even better, to get the users involved earlier and observe them earlier.

The more I have delved into the study of ethnography, the more I have seen patterns similar to software testing. This makes me aware that software testing is not solely a hard science but a craft that encompasses many disciplines outside the typical number-crunching and algorithm-creating world of software development.

Within the testing profession we need to look outside the box and find approaches, methods and structures that can improve the discipline. To ensure our craft grows we need to ensure we do not narrow our field of vision or thought.

Friday, 14 January 2011

Remember you’re a Tester

I want you to remember one word in the following list:

Bug
Insect
Ant
Dragon Fly
Ladybird
Crane Fly
Beetle
Bee
Wasp
Hornet
Cockroach
Earwig
Termite
Grasshopper
Flea
Mosquito

My previous post was about debriefs and how important they are to testing.

The problem I have come across during debriefs has been trying to remember all the things that happened during the day or during the session(s). Maybe this is a me-getting-older thing and my memory is going.

While reading recently about cognitive bias, I came across one bias that could be helpful both during testing and during debrief sessions: the Von Restorff effect, which is basically the way our brains remember things that stand out.

http://changingminds.org/explanations/memory/von_restorff.htm

The above link uses the example of a list of words in which one word is in a different colour; our brains are more likely to remember the word that is a different colour and stands out.

You might be asking what connection this has to testing.

Michael Bolton via twitter pointed me towards Adam White who has an interest in the Von Restorff effect. In his blog Adam states the following:

I use the Von Restorff effect in testing all the time. I frequently notice what doesn’t belong and it tends to be what I remember the most. http://www.adamkwhite.com/2007/09/30/using-heuristics-to-cook/

Part of our skill as testers is noticing:

  • When something does not appear to fit – we notice
  • When something appears out of place – we notice
  • When something appears not quite right – we notice.

Could it be that testers have a strong Von Restorff cognitive bias? Maybe this is the missing ‘thing’ that people say testers have: you cannot describe it, but you just know it is a skill you have.

Going back to this article...

How can this help during debrief?

My thought is that, when working within SBTM (session-based test management), if something important for the debrief happens, we should make a note of it and highlight it in a different way, making it stand out so that we remember it later. Maybe some people already do this (the use of the highlighter pen).
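For example, a plain-text session sheet could mark items destined for the debrief with a marker that stands out. The format and marker below are purely my own sketch, not a feature of any tool:

14:02 opened export dialog, defaults look sensible
14:09 >>> HIGHLIGHT: export silently truncates file names over 255 characters
14:15 retried with CSV output, same truncation
14:21 >>> HIGHLIGHT: no error or warning shown to the user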

Maybe the excellent tool for recording sessions, Rapid Reporter by Shmuel Gershon, could be expanded. Could it have an option to highlight certain things to make them stand out? I know it can do RTF and bold, but that is not enough for me. I need highlighting and colouring, plus an option to do freehand doodles.

Why doodles, you may ask?

One of the side effects of the Von Restorff effect is that we remember words better if they are associated with a picture. If I need to remember that a URL is not working, I would doodle a chain with a link missing; for an interface that is failing to communicate, I could draw a face with a plaster over the mouth. Just little things that help me remember the problems that occurred. By the way, I am rubbish at drawing, not on the same level as the Cartoon Tester.

To conclude this article: I think as testers we already have a cognitive bias towards remembering things that stand out, and it appears we notice them a lot more than most, whether through our continuing training or a natural skill we possess. We need to ensure that the important things we need to remember for debriefs are made to stand out during our testing sessions, so that we do not forget them.

Which word did you remember from the list at the beginning?

Was it termite?