
Tuesday, 15 September 2015

Testing skills - Abductive Reasoning

This is the first in a series of hopefully short articles looking at skills and techniques from outside traditional testing that can be useful to those who practice testing.

Future topics planned include:

  • Influencing by listening
  • Note Taking
  • Leading teams
  • Persuasion and how to sell
  • Speaking the language of business 
  • Remote teaching experiential style
  • Going beyond the model
If you can think of any others that would be useful please leave a comment. 

I am making these public, since by writing it down and making it public I am committing myself to do it.  That is your first tip in this article: if you want to commit to doing something which you keep putting off, write it down and make it public.

Abductive Reasoning.

Abduction, as discussed here, came about from the work of Jo Reichertz. In this work Reichertz described another cognitive logic process for discovery, used when the researcher encounters surprising findings in the data; he called it "a cognitive logic of discovery".  Before this there were two types of reasoning in common use, 'inductive' and 'deductive'.
  • Inductive - Making generalized conclusions from specific observations
  • Deductive - Proving or disproving a theory from observations (scientific method)
Abductive reasoning is an important process for those involved in testing.  The majority of the time when we are testing we discover surprising behavior in the software.  This normally makes us rethink our theories of how the software works, and as such we begin to re-evaluate our understanding of it.  We create a new rule, or test idea, to further investigate the surprising element of what we have just tested. This is key within grounded theory: our thoughts about the software and how it behaves change as we explore the software more.  How we report these surprises and the behavior of the software is crucial to the value that testers provide to a project.
"There are two strategies involved in abduction, both of which require creating the conditions in order for abductive reasoning to take place"* (Reichertz, 2007: 221). 
The first is a ‘self-induced emergency  situation’ (Reichertz, 2007: 221). This means that in the face of not knowing what to make of a surprising finding, rather than dwelling on the infinite number of  possibilities, the analyst puts pressure on themselves to act by committing to a single meaning.
The second strategy is completely antithetical to the first. It involves letting your mind wander without any specific goal in mind, or what Pierce (1931–1935), a key writer on abduction, called ‘musement’* (Reichertz, 2007: 221). "
Qualitative Research Methods in Psychology: From core to combined approaches - Nollaig Frost - 2011.
Reichertz makes the following observation about these two strategies.
"What these two quite antithetical strategies have in common is tricking the thinking patterns of the conscious mind in order to create ‘an attitude of preparedness to abandon old convictions and to seek new ones."
 The SAGE Handbook of Grounded Theory:(Sage Handbooks) - Antony Bryant, Kathy Charmaz  - 2010.
Testers need to be able to abandon their old convictions and seek out new ones.  This is especially important when we are testing software, since our biases, beliefs and previous experiences can influence our decision making. Using some of the methods described in this book can allow us to challenge our thinking about the software and engage in abductive reasoning.

One famous use of abductive reasoning is that of Sir Arthur Conan Doyle's fictional detective Sherlock Holmes.  Many people believe, wrongly, that Sherlock Holmes uses deductive reasoning to solve his cases, when in reality he uses abductive reasoning.
"Holmes' method doesn't resemble deductive reasoning at all. Instead, it's much more similar to a form of reasoning known as "Abductive Reasoning"Debunking Sherlock Holmes Myths - Maiza Strange  May 2014
To summarise, abductive reasoning is taking your best guess based upon your current knowledge, observations and experiments.  These pieces of information may be incomplete, but you use your cognitive reasoning processes to form a theory or conclusion.  For example:
"A medical diagnosis is an application of abductive reasoning: given this set of symptoms, what is the diagnosis that would best explain most of them? Likewise, when jurors hear evidence in a criminal case, they must consider whether the prosecution or the defense has the best explanation to cover all the points of evidence. While there may be no certainty about their verdict, since there may exist additional evidence that was not admitted in the case, they make their best guess based on what they know."Deductive, Inductive and Abductive Reasoning - Butte College.
This article has been taken from the Testing and the Social Science chapter in my book - The psychology of Software #Testing.


Wednesday, 29 January 2014

Using games to aid tester creativity

Recently Claire Moss blogged about potty training and how this came about from a card game called Disruptus, which I introduced to the Atlanta Testing meet-up while I was in the USA.  This reminded me that I was going to blog about how I use this tool in a workshop and in my day-to-day testing to improve upon my own and my team's testing ideas.  The workshop is a creative and critical thinking and testing workshop which I intend to deliver at the London Tester Gathering in Oct 2014 – early bird tickets available. 

The workshop is based upon a series of articles that I have written on creative and critical thinking, part 1 here.  As part of the workshop I talk about using tactile tools to aid your creative thoughts; having objects you can hold and manipulate has been shown to improve creativity (kinesthetic learning).  One part of the workshop introduces the game of Disruptus, which has very simple rules. You have about 100 flash cards which have drawings or photographs on them, and you choose a card at random. They even include some spare blank cards for you to create your own flash cards. An example of some of the cards can be seen below:



You then have a selection of action cards which have the following on them:
  •  IMPROVE
    • Make it better: Add or change 1 or more elements depicted on the card to improve the object or idea
    • EXAMPLE From 1 card depicting a paperclip: Make it out of a material that has memory so the paperclip doesn’t distort from use.
  • TRANSFORM
    • Use the object or idea on the card for a different purpose.
    •  EXAMPLE From 1 card depicting a high heel shoe: Hammer the toe of the shoe to a door at eye level and use the heel as the knocker.
  • DISRUPT
    • Look at the picture, grasp what the purpose is, and come up with a completely different way to achieve the same purpose.
    •  EXAMPLE From 1 card depicting a camera: Wear special contact lenses that photograph images with a wink of the eye.
  • CREATE 2
    •  Using 2 cards take any number of elements from each card and use these to create a new object or idea.
  •  JUDGES CHOICE
  •  PLAYERS CHOICE
For the purpose of this article I will only be looking at the first three.  You can either choose which action card you wish to use or use the dice that is provided with the game. The rules are simple: you talk about how you have changed the original image(s) in accordance with the action card, and a judge decides which idea is best to determine the winner.  When I do this we do not have winners; we just discuss the great ideas that people come up with. To encourage creativity, there are no bad ideas.

The next step in the workshop is applying this to testing. Within testing there are still a great many people producing and writing test cases which are essentially checks. I am not going to enter into the checking vs testing debate here, however this game can be used if you are struggling to move beyond your ‘checks’ and repeating the same thing each time you run your regression suite. It can be used to provide ideas to extend your ‘checks’ into exploratory tests.  

Let us take a standard test case:
Test Case: Log into the application using a valid username/password
Expected result: Login successful, application screen is shown.
Now let us go through each of the action cards and see what ideas we can come up with to extend this into an exploratory testing session.

  •  IMPROVE - Make it better: (Add or change 1 or more elements depicted on the card to improve the object or idea.)

Using the action described above can you think of new ways to test by taking one element from the test case?

Thinking quickly for 1 minute I came up with the following:
    • How do we start the application?  Are there many ways?  URL?  Different browsers? Different OS?
    • Is the login screen good enough or can it be improved (disability issues/accessibility)?
    • What are valid username characters?
    • What are valid password characters?
    • Is there a help option to know what valid usernames/passwords are?
    • Are there security issues when entering the username/password?
Can you think of more?  This is just from stepping back for a minute and allowing creative thoughts to appear.  (Remember, there are no bad ideas.)
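To show where one of these ideas might lead, here is a hedged sketch of turning the "What are valid username characters?" question into a parameterised check; the login() function and the character sets are placeholders I have invented, not part of any real application.

```python
# Sketch: extending the "valid username/password" check by varying one
# element - the characters allowed in the username. login() below is a
# stand-in for whatever your application actually provides.
import pytest

def login(username, password):
    """Placeholder for the real login call; pretend it returns True on success."""
    return bool(username) and bool(password)

# Each case comes from one "valid username characters?" idea in the list above.
usernames = [
    "alice",            # plain lower case
    "ALICE",            # upper case
    "alice.o'hara",     # punctuation
    "ălïçé",            # accented characters
    "a" * 256,          # very long name
    " ",                # whitespace only
]

@pytest.mark.parametrize("username", usernames)
def test_login_with_varied_usernames(username):
    # We don't yet know which of these *should* succeed - the point of the
    # exploratory session is to find out, then pin the answer down here.
    result = login(username, "correct-horse-battery-staple")
    assert isinstance(result, bool)
```

The weak assertion is deliberate: the exploration tells you what the expected results should be, and the checks are then tightened up afterwards.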

Let us now look at another of the action cards.
  • TRANSFORM - Use the object or idea on the card for a different purpose.
What ways can you think of from the example test case above to transform the test case into an exploratory testing session?

Again we could look at investigating:
    • What alternatives are there to logging into the application? Fingerprint, secure token, encrypted key?
    • Can we improve the security of the login code?
    • What security issues can you see with the login, and how can you offer improvements to prevent these issues?
It takes very little time to come up with many more ways in which you can transform the test case into something more than a ‘check’.

Now for the next (and final for the purpose of this article):
  • DISRUPT - Look at the picture, grasp what the purpose is, and come up with a completely different way to achieve the same purpose.
I may have already touched upon some of the ideas on how to disrupt in the previous two examples. That is not a bad thing: if an idea appears in more than one area, it could be an indication of an idea that may very well be worth pursuing.

Some ideas on disrupting could be:
    • Do we need a login for this? 
    • Is it being audited?
    • Is it an internal application with no access to the public?
I hope from this article you can see how such a simple game can help to improve your mental ability and testing skills, as Claire mentioned in her article.
Since software testing is a complex mental activity, exercising our minds is an important part of improving our work.
This is just a small part of the workshop and I hope you have enjoyed the article. If so, I hope to see some of you soon when I run the full workshop. 

PS – I intend to run a cut down version of the workshop for the next Atlanta Testing Meet Up whilst I am here in the USA.  Keep a watch here for announcements in the near future.




Thursday, 21 November 2013

Mapping Enough to Test

I have seen on far too many occasions, whilst working in testing, people spending months gathering together information and creating test plans to cover every single requirement, edge case and corner case.  

Some people see this as productive and important work; in the dim and distant past I did too. I have learnt a lot since then and now I personally do not think this is as important as people try to make it out to be.  I see this as a waste of effort and time, time which would be better spent actually testing the product and finding out what it is doing.  This is not to say that ‘no’ planning is the right approach to take, rather that the test planning phase may be better suited to defining what you need to do to start doing some testing.  It is more important to discover things that could block you or, even worse, prevent you from testing at all.  This article looks at the planning phase of testing from my own personal experiences and viewpoint.

A starting point for this article was re-reading an article that Michael Bolton wrote for a previous edition of the Sticky Minds magazine called ‘Testing Without a Map’.  Within this article Michael talked about using heuristics to help guide your testing effort; at the time he suggested using HICCUPS as a guide to your testing and a focus on inconsistencies.  That article was about useful approaches when actually testing the product rather than the planning phase.  This article focuses on what happens before you actually test.

The only way to know something is to experience it, and by experiencing what the software is doing you are testing it.  My own experience is that there is normally a delay between what is being developed and having something testers can test (yes, even in the world of Agile); this is the ideal time in which we can and should do some test planning.  But what do we include in our plan?  If we follow the standard IEEE approach to test planning we get the following areas recommended for inclusion in the test plan.

1.      Test Plan identifier – unique number, version, identification when update is needed (for example at x % requirements slip), change history
2.      Introduction (= management summary) – in short what will and will not be tested, references.
3.      Test items (derived from risk analysis and test strategy), including:
a.      Version
b.       Risk level
c.      References to the documentation
d.      Reference incidents reports
e.       Items excluded from testing
4.      Features to be tested (derived from risk analysis and test strategy), including:
a.      Detail the test items
b.      All (combinations of) software features to be tested or not (with reason)
c.      References to design documentation
d.      Non-functional attributes to be tested
5.      Approach (derived from risk analysis and test strategy), including:
a.      Major activities, techniques and tools used (add here a number of paragraphs for items of each risk level)
b.      Level of independence
c.      Metrics to evaluate coverage and progression
d.      Different approach for each risk level
e.      Significant constraints regarding the approach
6.      Item pass/fail criteria (or: Completion criteria), including:
a.      Specify criteria to be used
b.      Example: outstanding defects per priority
c.      Based on standards (ISO9126 part 2 & 3)
d.      Provides unambiguous definition of the expectations
e.      Do not count failures only, keep the relation with the risks
7.      Suspension and resumption (for avoiding wastage), including:
a.      Specify criteria to suspend all or portion of tests (at intake and during testing)
b.      Tasks to be repeated when resuming test
8.      Deliverables (detailed list), including:
a.      Identify all documents -> to be used for schedule
b.      Identify milestones and standards to be used
9.      Testing tasks for preparing the resource requirements and verifying if all deliverables can be produced.
10.   The list of tasks is derived from Approach and Deliverables and includes:
a.      Tasks to prepare and perform tests
b.      Task dependencies
c.      Special skills required
d.      Tasks are grouped by test roles and functions
e.      Test management
f.       Reviews
g.      Test environment control
11.   Environmental needs (derived from points 9 and 10) for specifying the necessary and desired properties of the test environments, including:
a.      Hardware, communication, system software
b.      Level of security for the test facilities
c.      Tools
d.      Any other like office requirements

WOW – if we did all of this when would we ever get time to test?  The problem is that in the past I have been guilty of blindly following this by using test plan templates and lots of cut and paste from other test plans.  

Why?  

This was how we had always done planning and I did not question whether it was right or wrong, or even useful.  Mind you, in the back of my mind I would wonder why we were doing this, since nobody ever read it or updated it as things changed.  Hindsight is a wonderful thing!

My thinking about what we really need to do when planning has changed drastically, and now I like to do just enough planning to enable a ‘thinking’ tester to do some testing of the product. The problem we face in our craft is that we make excuses not to do what we should be doing, which, by the way, is actual testing.  We try to plan in far too much detail and map out all possible scenarios and use cases rather than focus on what the software is doing.  Continuing with the theme of ‘The Map’ from the article by Michael Bolton, Alfred Korzybski once stated that

“The map is not the territory”

As a reader of this article what does that imply to you?

To me it was an epiphany moment; it was when I realised that we cannot, and should not, plan what we intend to test in too much detail.   What Korzybski was saying with this statement is that no matter how much you plan, and however detailed your plan is, it will never match reality.  In some ways it is like designing a map with a 1:1 scale.  How useful would you find this kind of map for getting around?  Would it be of any use?  Would it actually map the reality of the world you can see and observe?  It would not be dynamic, so anything that has changed or moved would not be shown.  What about interactive objects within the map?  They are constantly changing and moving, and as such by the time you get hold of the map it is normally out of date.  Can you see how that relates to test plans?

What this means in the reality of software testing is that we can plan and plan and plan, but that gives no indication of the reality of the testing that we will actually do. After a discussion on Skype, Michael Bolton came up with a great way of putting it and said we need to split planning time up into test preparation and actual planning.

You need to spend some time getting ready to test: getting your environments, equipment and automation in place. Without this you could be blocked from actually starting to do some testing.  This is vital work and far more important than writing down step-by-step test scripts.

The purpose of testing is to find out information, and the only way to do this is to interact with the application.  It is said that more things are discovered by exploration and accident than by planning; planning for something and that something then happening is more than likely a coincidence.   The problem with doing too much planning is that it becomes out of date by the time you get to the end of your testing.  It is much better to have a dynamic, adaptive test plan that changes as you uncover and find more to test. One of the ways I have adopted this is by the use of mind maps; there have been many articles in the testing community about this subject, so if you want to know more I would suggest you go and Google ‘mind maps and software testing’.
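One lightweight way to picture such a dynamic, adaptive plan (this is my own sketch rather than a prescribed format) is as a structure that starts almost empty and grows new branches as testing uncovers more, much like adding nodes to a mind map:

```python
# Sketch: a test plan as a growing tree of areas and missions, rather than a
# fixed document. Area and mission names are invented for illustration.
plan = {
    "Login": ["Explore valid/invalid credentials"],
    "Reports": [],  # nothing known yet - a branch waiting to grow
}

def add_mission(plan, area, mission):
    """Grow the plan as testing uncovers something new worth investigating."""
    plan.setdefault(area, []).append(mission)

# During a session we notice report exports hang on large data sets,
# so the plan grows new branches instead of being rewritten up front.
add_mission(plan, "Reports", "Investigate export behaviour with very large data sets")
add_mission(plan, "Performance", "Time the export seen hanging in session 12")

for area, missions in plan.items():
    print(area, "->", missions)
```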

The problem we have is that people are stuck in the mentality that test cases are the most important thing that needs to be done when we start to do test planning.  There is a need to move away from test cases towards missions (goals): something that you could do and achieve in a period of time, something that, more importantly, is reusable, and whose use will depend on the context and the person doing the mission.  When planning you only need to plan enough to start testing (as long as your test prep has been done); then, when you test, you will uncover interesting information and start to map out what you actually see rather than what you thought you might see.  Your test plan will grow and expand as you become information and knowledge rich in what you find and uncover.

Jutta Eckstein, in her article on planning and controlling complex projects, makes the following statement: 
"Accurate forecasts aren't possible because the world is not predictable"

So it is wise not to plan too far ahead: plan only enough to do some testing, find out what the system is doing, and adjust your planning based upon the hard information you uncover.  Report this to those that matter; the information you find could be what is valuable to the business.  Then look for more to test. You should always have a backlog, and the backlog should never be empty.  The way in which I do this is to report regularly what we have found and what new missions we have generated based upon the interesting things we came across. I then re-factor my missions based upon the two factors below (a rough sketch of this re-ordering follows the list):
  • The customer priority – how important is it to the customer that we do this mission?
AND
  • The risk to the project – if we did this mission instead of the one we had planned to do next from the backlog, what risk would that pose to the project?
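A minimal sketch of that re-ordering is below; the missions, the 1-to-5 scores and the simple priority-plus-risk weighting are all assumptions made for illustration, not a formal method.

```python
# Sketch: re-ordering the backlog of missions by customer priority and
# project risk. The missions and the 1-5 scores are invented examples.
missions = [
    {"name": "Explore report exports",    "customer_priority": 4, "project_risk": 2},
    {"name": "Investigate login lockout", "customer_priority": 5, "project_risk": 4},
    {"name": "Check audit log rotation",  "customer_priority": 2, "project_risk": 5},
]

# Highest combined score first; any weighting that matches your context would do.
backlog = sorted(
    missions,
    key=lambda m: m["customer_priority"] + m["project_risk"],
    reverse=True,
)

for mission in backlog:
    print(mission["name"])
```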

Paul Holland discusses this approach in more detail in an article Michael Bolton wrote.

To summarise, we need to think more about how much planning we do and think critically about whether producing endless pages of test cases during test planning is the best use of our resources.  We need to plan enough to do some testing and adapt our test plan based upon the information we uncover.  There is a need to re-evaluate what you intend to do often and adapt the plan as your knowledge of the system increases.  It comes down to this: stop trying to map everything and map just enough to give you a starting point for discovery and exploration.

*Many thanks to Michael Bolton for being a sounding board and providing some useful insights for this article.

Friday, 11 October 2013

Believing in the Requirements

Traditionally in testing there has been a large amount of emphasis placed upon ‘testing’/‘checking’ the requirements.  An article by Paul Holland on functional specification blinders, and my current reading of Thomas Gilovich's excellent book How We Know What Isn't So, have made me re-think this strategy from a psychological perspective. I feel Paul was on the right track with his suggestion of not using the requirements/specification to guide your creative test idea generation but looking at alternatives.  However, even these alternatives could limit your thinking and creative ideas because of the way we think.
The problem we have is that once we have been presented with any information, our inbuilt beliefs start to play their part and we look at that information with a biased slant.  We are built to look for confirmations that match our beliefs; in other words, we look for things we want to believe in.  So if we believe the implementation is poor or the system under test has been badly designed, we will look for things that confirm this and provide evidence that what we believe is true.  We get a ‘buzz’ when we get a ‘yes’ that matches our beliefs.  The same can apply when looking through the requirements: we start to find things that match our beliefs and, at the same time, the requirements (especially if ambiguous) start to influence our beliefs so that, as Paul discovered, we only look for confirmations of what is being said.  Once we have enough information to satisfy our beliefs we stop and feel that we have done enough.
The other side of this is that any information that goes against our beliefs makes us dig deeper and look for ways to discount it.  When faced with evidence that contradicts what we believe, we want to find ways to discount that information and find flaws in it.  The issue is that if we are looking at requirements or specifications, there is normally not much that goes against our initial beliefs, due to the historic influence that these documents can have.  So we normally do not get to the stage of digging deeper into the meaning of these documents.
As Thomas Gilovich stated
People’s preferences influence not only the kind of information they consider, but also the amount they examine.
If we find enough evidence to support our views then normally we are satisfied and stop.  This limits our scope for testing and for being creative. My thought on how to get around this, apart from following the advice Paul gives, is to be self-critical and question oneself.
When we are in a mode of confirming our beliefs, we internally ask ourselves the following question:
 “Can I believe this?”
Alternatively, when we find information that does not match or confirm our beliefs, we internally ask ourselves the following question:
“Must I believe this?”
These questions are taken from the book by Thomas Gilovich referenced earlier, in which Gilovich states:
The evidence required for affirmative answers to these two questions are enormously different.
Gilovich mentions that this is a type of internal framing we do at a psychological level. Reading this reminded me to go back and read the article by Michael Bolton on test framing, on which I attended a tutorial at the EuroSTAR Test Conference. I noted within Michael's article that there appeared, IMO, to be a lot of proving the person's beliefs rather than disproving them.  In other words, many of the examples were answering the “Can I believe this?” question.  This is not wrong; it is a vital part of testing, and I use the methods described by Michael a great deal in my day-to-day work.  I wonder if this topic could be expanded a little by looking at the opposite and trying to disprove your beliefs, in other words asking the “Must I believe this?” question.
So moving forward I believe that we can use our biases here to our advantage to become more creative in our test ideas.  To do this we need to look for ways to go against what we believe is right and think more negatively.  The next time you look at a requirements or specification document, ask yourself the following:
“MUST I BELIEVE THIS?”
And see where this leads you.

PS – this article is a double-edged sword: if you have read it, you should now be asking “Must I believe this?”

Tuesday, 27 August 2013

The ‘Art’ of Software Testing

I was recently in Manchester, England when I came across the Manchester Art Gallery, and since I had some time spare I decided to visit and have a look around.  I have an appreciation for certain styles of art, especially artists such as William Blake and Constable.  During my visit I had a moment of epiphany.  Looking around the different collections, these appeared to be set out in a structured style, with similar styles collated together, apart from an odd example of the famous “Flower Thrower” artwork by Banksy being placed in the seventeenth-century art collection area.  I wondered if this was a deliberate action to cause debate.

What struck me when looking around was the fact that, even though there were many similar painting techniques and methods being applied, there was no standard size for any of the paintings on display.  I looked around and could not find two paintings that appeared to have the same dimensions.  I even started to wonder if a common ratio was being used, such as the so-called golden ratio. Looking around quickly I could see some aspects of ratios being used, but to my eyes it appeared that even though the artists used similar approaches and techniques they were ‘free’ to use these methods as a guide to producing their masterpieces.

This made me think about the debates in the field of software testing and how we should be taking on board engineering processes and practices.  If this is the case and we try to rein in the imagination, how are we supposed to hit upon moments of serendipity and be creative? I agree there needs to be structure and some discipline in the software testing world; session-based testing takes account of this.

We have common methods and techniques that we can use in software testing, but how we apply them should surely be driven by the context of the project.  I believe we need to stop or resist processes and best practices that hinder or suppress innovation and prevent those moments of enlightenment in which deeply hidden ‘bugs’ are discovered. Following pre-planned and pre-written test steps by rote can be useful, but if we all follow the same path how many wonderful things are we going to miss?  I liken it to following a map or a GPS system and not looking around you at the amazing landscape that you are failing to notice or appreciate.  In our field of software testing we must allow testers to explore, discover and sometimes find by accident.  We need to stop forcing processes, best practices and standards upon something which is uniquely human: the skill of innovation and discovery.

The title of this article is an homage to one of the first books I read about software testing: The Art of Software Testing by Glenford Myers.


Friday, 12 April 2013

Creative and Critical Thinking and Testing Part 7

This is the final post in this series on creative and critical thinking and testing.  It has been a journey of discovery for me, and along the way I have found out that there is more to how we think when testing than even I first thought; all of this came about from an initial post-it note diagram. Along this journey we have:

Looked at what critical and creative thinking is, defined the stages of testing, and examined the thinking required for documentation review, test planning, test execution and test analysis.

This final post of the series will look at the styles of thinking required when we are reporting our testing.

Test Reporting

So after the planning, execution and analysis you are now ready to report your findings.  The style of thinking required for this phase appears obvious: you need to be creative in how you are going to present the information you have found.  You need to make sure that it is clear and easy for your reader to understand, without any possible chance of it being misunderstood or, more dangerously, misused.  To do this you will need to think a little bit critically and ask yourself the following about the information you are presenting:


  • Can it be interpreted in different ways?
  • Is the most important information clearly shown (in the context of what is important to the reader)?
  • Have I made the context clear?
  • If using numbers am I using this to back up a story?
  • Have I made the main risks and issues very clear?
  • Is what I am reporting ethically and morally correct?

There are many more questions that you can ask yourself, but the key to this level of critical thinking is to ensure you are objective about what you report and as unbiased as possible in your reporting.  There are a few methods that can be used to help with reporting, and we can learn a little from the medical world and how they use critical thinking to help with medical research reporting:

“Students should also be encouraged to think about why some items need to be reported whereas others do not”

It is important to think about what should not be included, since this aids clarity for the reader.
Returning to creative thinking, one effective and creative way to present your testing information is by the use of dashboards.

Dashboards
James Bach talked about a low-tech testing dashboard on the Rapid Software Testing course and has some information about it on his website.

Low Tech Testing Dashboard Example – James Bach - taken from these slides

Del Dewar went slightly further and presented the dashboard as shown below.



More information on what each of the columns, numbers or colours means can be found via the links – in other words, I want you to do a bit of research into dashboards and how useful they may be for your own test reporting.
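As a rough sketch only (the column names below are my own approximation of the low-tech dashboard idea, not a copy of either example above), a dashboard like this needs almost no tooling:

```python
# Sketch: a text-only "low tech" dashboard. The areas, ratings and comments
# are invented; the columns are my approximation of the dashboard idea above.
rows = [
    # (area,        effort,  coverage 0-3, quality,  comment)
    ("Login",       "high",  3, "OK",      "session notes linked on wiki"),
    ("Reports",     "low",   1, "Unknown", "blocked - test data not ready"),
    ("Performance", "none",  0, "Unknown", "planned for next sprint"),
]

header = ("Area", "Effort", "Coverage", "Quality", "Comment")
print("{:<12} {:<7} {:<9} {:<8} {}".format(*header))
for area, effort, coverage, quality, comment in rows:
    print("{:<12} {:<7} {:<9} {:<8} {}".format(area, effort, coverage, quality, comment))
```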

From my own experience of using these styles of dashboard for test reporting, I found that they gave a very quick overview of the project and of the issues, but were not good at reporting progress, which is something that test management required; this leads on to storytelling and metrics.

Session Notes and Wiki Links
One more thing to add here: when I tried the above with senior management teams there was a request for access to the data about what was actually tested; in other words, they wanted access to the session notes and actual evidence of what had been done.  To solve this, at the top level of each dashboard we provided a link to the wiki where we kept all the session notes. I encourage you to have full transparency of testing effort and allow access to all who want to see what testing has taken place, and I feel it helps if there are no barriers or logins in the way for people to be able to access the raw data.

If, as described earlier in this document, we are using session-based test management, then we should be producing evidence of what we have tested and the information we have found as we go along, using whatever is the best method for capturing this: video, screen capture, wiki.  This should be kept in a place which everyone has access to, and everyone (who matters) should be aware of its location.

Storytelling
The next thing that you need to do with your test reporting is to tell a story or two.  This again requires some deep critical thinking.  Michael Bolton says that test reporting is about the telling of three stories at three levels.  I provide a quick summary of this below; for full details refer to the original article.


  • Story of the product – this is where you tell the story of the product using qualitative measures: using words and creative description of the product, what it does and did, and what you found interesting.
  • Story about testing – this is used to back up your story of the product: what did you do when testing, what did you see and what did you cover.  It is about where you looked, how you looked and where you have not yet looked.
  • Story about progress – the final story is about why you think the testing you did was good enough and why what you have not tested was less important (critical thinking).

Michael has a lot more information about test reporting in a great series of articles:



Markus Gärtner summarises this in his article titled “Tell a story at the daily stand up”.

As can be seen from the articles published by Michael Bolton you quickly switch from one style of thinking to the other depending on the context of the story you are telling.  This is a difficult skill for a tester to master but once you practice it you can become an expert test report story teller.

Another way in which you can be creative and report your testing as well as your test planning is by using mind maps. Darren McMillan produced a great article on this and it can be found here.

It is also important at this stage to remember your test plan and look at what information you will need to update in it.  What you found out during testing, and how your risks and priorities may have changed, need to be reflected in your test plan.

Qualitative vs Quantitative
There have been many discussions within the testing community about qualitative and quantitative measurements, some of which I will share here as useful information.  It is very easy to fall into the trap of thinking that numbers tell the whole story and, since they are easy to collect, will provide all the information that management requires.  I think we need to be careful of this and use our critical thinking to make a judgement on what the numbers really provide.

Cem Kaner has an excellent article on the validity of metrics here and the thing I most noted about this was the following:

“If our customers demand these metrics then we provide them but we have a moral & ethical duty to inform that they are flawed”

I agree with this, but we need to use both our critical and creative thinking to provide the story to go with the metrics.  I think we all agree that quantitative measures are flawed, but we need to be able to think creatively to resolve this and provide the information the customer requires in a way which is simple and easy for them to understand without misleading anyone.

Some of the discussions within the testing community about test metrics:





NEXT

So you have got to the end of this article and hopefully have an understanding that the different stages of testing require different types of thinking at different levels.  So what do you do now?  First of all, this is not the end of the journey.  You now go back to the start and continue until told not to; at the same time you can continue to practice some of the lessons and examples given in this document.  Improve them, be creative and create your own, adapt them to fit your needs.  This is not a document of best practice; it is a springboard to help you, a reference guide, if you like, that can be modified and act as a starting point for you to improve and learn more about what style of thinking you may need when testing.

The important lesson is that testing requires a great deal of thinking and if you are not thinking when you are involved in any testing activity then maybe, just maybe, you may not be testing.

Enjoy.

John Stevenson

Thursday, 4 April 2013

Creative and Critical Thinking and Testing Part 6

The previous articles in this series have looked at what critical and creative thinking is, defined the stages of testing, and looked at the thinking required for documentation review, test planning and test execution.  This article looks at the test analysis phase and the styles of thinking that may be best for this stage, as described in the diagram from the first article of this series.

Test Analysis

So you have finished your testing session and gathered together all sorts of evidence of what you have uncovered, discovered and learnt.  Now is the time to look in detail at what you have done and found, and apply some critical thinking to the task.

This is one stage within testing which I feel is understated and often does not have a great amount of time and effort spent on it.  However, I think it is one of the most valuable for the tester who carried out the testing, since it allows them to analyse what they have done, to think critically about themselves, and to see if there are improvements they can make.

It is interesting, if you Google ‘test analyst’ and ‘definition’, to see the variety of responses that are returned.   A selection of extracts is shown below:

“A test analyst reviews the results of process tests in a company's operating systems or manufacturing processes. The analyst also researches potential defects and works in tandem with engineers to provide solutions.” 
(Ehow.com – Test Analyst Job Description)
“In the position of test analyst, a person fulfils an important role by reviewing results from process tests in a business’s manufacturing or operating systems. The analyst will also research potential deficiencies and work together with engineers in order to provide solutions” 
 (Job is Job Test Analyst job description)
“The Senior Test Analyst is responsible for designing, developing, and executing quality assurance and control processes, test strategies, test plans and test cases that verify a software conformance to defined acceptance criteria (i.e. System behaviours) and feature design documents, as well as application standards”  
(Aus registry job specification Senior Test Analyst)
“Works with the Testing team to assist with the preparation of test plans and the testing of software in line with company guidelines and standards”  
(Vector UK Job description Test Analyst)

What I find interesting is that there appear to be two main definitions of a test analyst: one who analyses what has been tested, and one who plans, executes and analyses the test results.  It appears that over time what could or may have been a specialist role has become one which is interchangeable with the title of ‘tester’. There is nothing wrong with this, and it may slightly digress from the purpose of this section, but I thought it was a useful comparison of definitions of what a test analyst is.

In my own world it is a stage that all testers need to be proficient in, having the necessary skills and thinking to carry out the task. The analytical skills of testers appear, IMO, to be a forgotten skill, or one on which less importance is being placed.

DEBRIEF

The first thing that should be done after you have completed your test execution phase is to reflect on all you have done, and the best way to do this is to debrief and talk to other people.  There are many ways that you can do this, but one model within the session-based test management framework is the use of PROOF (expanded to PROOF LA).  This is a very useful and quick way to explain what you have done, what you need to do and what you plan to do, by the use of some simple questions.

  • Past. What happened during the session?
  • Results. What was achieved during the session?
  • Obstacles. What got in the way of good testing?
  • Outlook. What still needs to be done?
  • Feelings. How does the tester feel about all this?
  • Learning.  What have you learnt? What could you do better?
  • Adaptation. What do you need to adapt or change?

If you are working in a small team then it may not be possible to do this style of review; however, there is nothing to stop you doing some self-reflection and using these questions to debrief yourself.  You may notice that these questions require both critical and creative thinking in equal measure. The questions will make you either think critically about what you have done or need to do, or weigh up what you could do better along with the improvement ideas that your creative thinking will help generate.
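To make the self-debrief concrete, here is a small sketch (my own structure, not an official template) that captures the PROOF LA headings as fields you could fill in at the end of each session; the example answers are invented:

```python
# Sketch: capturing a PROOF LA debrief as a simple record filled in after
# each session. The example answers are invented.
from dataclasses import dataclass

@dataclass
class SessionDebrief:
    past: str        # What happened during the session?
    results: str     # What was achieved?
    obstacles: str   # What got in the way of good testing?
    outlook: str     # What still needs to be done?
    feelings: str    # How does the tester feel about all this?
    learning: str    # What have you learnt? What could you do better?
    adaptation: str  # What do you need to adapt or change?

debrief = SessionDebrief(
    past="Explored report exports with large data sets",
    results="Two issues raised, one how-to guide added to the wiki",
    obstacles="Test environment reset twice, losing data",
    outlook="Re-test export once the data loader is fixed",
    feelings="Frustrated by the environment, pleased with the findings",
    learning="Exports behave differently when run from the API",
    adaptation="Ask for a protected test environment for the next session",
)
print(debrief)
```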

WHAT DID YOU LEARN?

When you have completed the debrief it is important to spend a little more time thinking about the ‘new’ stuff you have learnt.  This is valuable information that could be used not just by you but by other people. Using the evidence you have gathered, you could put together some ‘how-to’ guides if there were parts of what you did that were surprising or difficult.  This also aids your own memory and helps to reinforce your learning.  The added benefit is that others can look at this and use it to aid their own learning and understanding.  The way I implement this is by the use of a wiki, in which for the project we are working on we have a section where we link to or create useful information.

DEFECTS

Looking at your testing evidence, the next thing you may want to do is think critically about the issues you found which could be defects.  You may first want to try to repeat each issue to ensure it is reproducible (some might not be). You may want to talk to a developer, customer or architect to discuss if it is really a defect.  If, after thinking about the issue, you decide to raise it as a defect, attach the evidence you captured during the test execution phase.  Creating a good defect report requires a lot of critical thinking on the part of the tester, and I would highly recommend that you participate in the Bug Advocacy course or at least work your way through the ‘free’ course material.  If you really want to get your defects fixed you need to present a compelling argument as to why each defect needs to be fixed; this course can help you achieve that.

*To do the Bug Advocacy course you need to complete the Black Box Software Testing foundations course first.

AUTOMATION

Once you have raised your defects, you could now use a mixture of creative and critical thinking to see which of the evidence you have gathered would prove useful to automate.  In this context I am talking about the use of BDD test frameworks.  At the same time it could be worth using some creative thinking to see what automation tools could help support your exploratory testing.  It is useful to remember that automation is not just a checking exercise but can prove invaluable in aiding exploratory testing.  Michael Bolton wrote a blog post on this subject where he talks about manual and automated testing and the meaningless use of labels.
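The paragraph above mentions BDD frameworks; purely as a simpler hedged sketch of the same step, the plain pytest example below turns a (hypothetical) surprise found while exploring into a small automated check. The export_report() function and its behaviour are invented for illustration:

```python
# Sketch: promoting a finding from an exploratory session into a small
# automated check. export_report() is a stand-in for the real application.
def export_report(row_count):
    """Placeholder for the real export call; returns a status string."""
    return f"{row_count} rows exported"

def test_export_completes_for_large_reports():
    # Captured from a session where very large exports appeared to hang;
    # once the defect is fixed this check guards against it coming back.
    assert export_report(50_000) == "50000 rows exported"

def test_export_completes_for_small_reports():
    assert export_report(10) == "10 rows exported"
```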

FUTURE IDEAS

One area which people carrying out exploratory testing appear to miss out on is looking for future opportunities to test.

We forget the definition of exploratory testing, especially the “test design” part:

“Simultaneous test design, execution and learning” 
 (Exploratory Testing explained)
If you did not make any notes of future things to test when carrying out the test execution phase then you may soon have no job to do!  This is an important aspect of exploratory testing and one in which you need to remain focused on when testing.

If you do have a list of ideas, this is the time to use some more critical thinking and see which ideas have value within the context of the project you are currently testing.  You can, if you like, give each idea a priority and a risk value if this helps your critical thinking about the value of the idea. Another thing that may help you critically evaluate your test ideas is to discuss them with other members of the team.  You could also apply some test framing.  It should be noted that while you are critically evaluating your future test ideas, your creative thinking side may become active and come up with other novel ideas that you may be able to test.  You should note these down as well, since they could prove to be valuable.

Improvement

We are not perfect and constantly look for ways in which we can improve.  An aspect of the test analysis is to reflect on what you have done and think critically about what improvements you could make.

The field of social science and ethnographic research can be helpful here, and I wrote an article on this. From that article I put together a way that testers can use reflection to help improve testing.  An abstract of this can be seen below:

Reflect
Personal reflection:

  • Could you have done things better if so what? (Both from a personal and testing perspective)
  • Have you learnt new things about the product under test (That are not documented)?
  • Has your view of the product changed for better or for worse? Why has your view changed?

‘Epistemological reflexivity’ (What limits did we hit?)

  • Did your defined tests limit the information you could find about the product?  (Did you need to explore new areas that you had not defined?)
  • Could your tests have been done differently? If yes how?
  • Have you run the right tests?
  • If you did things differently, what do you think you would have found out about the product?
  • What assumptions have you uncovered to be true/false?
  • Did the assumptions you make impede or help your testing?

The University of Plymouth produced an article on critical thinking and reflection which has some useful ideas that may help when you come to reflect on your own improvements.  They have also produced a nice critical thinking poster.

As you can see from the above, for the test analysis phase you mainly need critical thinking, with some creative thought, since the majority of the work you do during this phase is to reflect on what you have done, what you need to do and how well you have done it.

The next section will look into the test reporting phase.