Showing posts with label creative thinking. Show all posts

Tuesday, 30 September 2014

Latest Chapter of Book Published - Being Creative

I have published the latest chapter of my book The Psychology of Software Testing entitled 'Being Creative'.  This has been one of the most enjoyable chapters I have worked on and one that I am very proud of.  The following is a short extract from this chapter.

_________________

What is Creativity?

"Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn't really do it, they just saw something. It seemed obvious to them after a while."
 Steve Jobs - Wired Magazine

Many have a misconception of what being creative means.  Take a moment to note down some words that you feel describe creativity.

Did any of your words match the ones below?




Creativity can be all of these and more. It does not help that there are a variety of definitions of creativity, for example:

"Creative thinking is the generation of new ideas.”

or

"Creativity is the ability to combine ideas, things, techniques or approaches in a new way." 

Doppelt gives a good reason why it is so difficult to define creativity:

"Creativity is one of the words in the English language which means many things to many people.  At various times it may mean different things to the same person."
 Doppelt, J. E., 2012. What is creativity?

There is no correct definition for being creative and that is wonderful in itself, since you have no barriers to being creative.

When we talk about creating ideas it does not necessarily mean coming up with something entirely new.  The idea or concept you came up with may be new to you, but it may not be game-changing or revolutionary.  The majority of ideas come from existing ideas, or from combining different ideas to create a new one.

Software testing involves large amounts of creative thinking, and not just during the test planning phase.  When testing software we use creative processes to discover, uncover and learn.  When testing we should utilize these natural creative processes to guide our direction and future opportunities to test.  The majority of testers do this without even being aware that it is happening.  If you are following a test script or a testing charter, how often do you go off the beaten track because you thought of a new creative approach?

To put this another way, how often do you find ways to test the software that are both novel and unique?  Capturing this creative process is useful, since you then have a record of your thinking at that time, which can help to produce even more ideas.  The creative process is iterative: by creating new ideas you end up utilizing those ideas to create even more ideas.

______________________


This chapter also includes some extras:
  • Creativity Cue Cards
  • Software Testing SCAMPER poster
  • Software Quality Characteristics Poster
  • A JavaScript ideas generator
I will be presenting some of this material at the London Tester Gathering Workshops:

Creative and Critical Thinking and Testing Workshop
Thursday 16th and Friday 17th October 2014

The Skills Matter eXchange
116-120 Goswell Road,
 London, 
EC1V 7DP, 
GB

Wednesday, 19 March 2014

ExpoQA 2014 - Madrid May 2014


As I mentioned in my previous post I will be speaking at the EXPOQA conference in Madrid from Monday 26th May until Thursday the 29th May 2014.

I will be running a creative and critical thinking workshop on Tuesday 27th May, which will be the second time I have run this as a public event; previously I ran it at the Atlanta Tester Gathering meetup.  Eric Jacobson did a great write-up on his blog page.

I will also be presenting a talk entitled 'Stop Doing Too Much Automation' on Thursday 29th May.  This will be the first time that this talk has been presented in public.

So if that alone does not tempt you to sign up, how about 25% off the event?

The following code will give you a 25% discount off the event: SPEAKMULTI.

All you need to do is use the English registration page http://www.expoqa.com/en/conference-inscripcion.php and enter the code.

Follow the conference on twitter using @expoqa.

Look forward to seeing you there.

Wednesday, 29 January 2014

Using games to aid tester creativity

Recently Claire Moss blogged about potty training and how it came about from a card game called Disruptus, which I introduced to the Atlanta Testing meetup while I was in the USA.  This reminded me that I was going to blog about how I use this tool in a workshop and in my day-to-day testing to improve my own and my team's testing ideas.  The workshop is a creative and critical thinking and testing workshop which I intend to deliver at the London Tester Gathering in Oct 2014 – early bird tickets available.

The workshop is based upon a series of articles that I have written on creative and critical thinking (part 1 here).  As part of the workshop I talk about using tactile tools to aid your creative thoughts; having objects you can hold and manipulate has been shown to improve creativity (kinesthetic learning).  One part of the workshop introduces the game of Disruptus, which has very simple rules. You have about 100 flash cards which have drawings or photographs on them, and you choose a card at random. The game even includes some spare blank cards for you to create your own flash cards. An example of some of the cards can be seen below:



You then have a selection of action cards which have the following on them:
  •  IMPROVE
    • Make it better: Add or change 1 or more elements depicted on the card to improve the object or idea
    • EXAMPLE From 1 card depicting a paperclip: Make it out of a material that has memory so the paperclip doesn’t distort from use.
  • TRANSFORM
    • Use the object or idea on the card for a different purpose.
    •  EXAMPLE From 1 card depicting a high heel shoe: Hammer the toe of the shoe to a door at eye level and use the heel as the knocker.
  • DISRUPT
    • Look at the picture, grasp what the purpose is, and come up with a completely different way to achieve the same purpose.
    •  EXAMPLE From 1 card depicting a camera: Wear special contact lenses that photograph images with a wink of the eye.
  • CREATE 2
    •  Using 2 cards take any number of elements from each card and use these to create a new object or idea.
  •  JUDGE'S CHOICE
  •  PLAYER'S CHOICE
For the purpose of this article I will only be looking at the first three.  You can either choose which action card you wish to use, or use the dice provided with the game. The rules are simple: you describe how you have changed the original image(s) in accordance with the action card, and a judge decides which idea is best in order to pick the winner.  When I run this we do not have winners; we just discuss the great ideas that people come up with. To encourage creativity, there are no bad ideas.
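The draw mechanic described above is simple enough to mimic in code. As a purely hypothetical sketch (the card names and the pairing logic here are my own illustration, not the actual Disruptus deck or the ideas generator mentioned in the chapter extras), a few lines of JavaScript can pair a random image card with a random action card, much as the dice does:

```javascript
// Hypothetical idea generator mimicking a Disruptus draw:
// pick one random action card and pair it with one (or two) image cards.
const imageCards = ["paperclip", "high heel shoe", "camera", "umbrella"];
const actionCards = ["IMPROVE", "TRANSFORM", "DISRUPT", "CREATE 2"];

function draw(cards) {
  // Return a random element of the deck.
  return cards[Math.floor(Math.random() * cards.length)];
}

function generatePrompt() {
  const action = draw(actionCards);
  if (action === "CREATE 2") {
    // CREATE 2 uses elements from two image cards.
    return `${action}: combine elements of "${draw(imageCards)}" and "${draw(imageCards)}"`;
  }
  return `${action}: apply to "${draw(imageCards)}"`;
}

console.log(generatePrompt());
```

Blank cards in the real game map neatly onto simply pushing your own strings into `imageCards`.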

The next step in the workshop is applying this to testing. Within testing there are still a great many people producing and writing test cases which are essentially checks. I am not going to enter into the checking-vs-testing debate here; however, this game can be used if you are struggling to move beyond your 'checks' and are repeating the same thing each time you run your regression suite. It can be used to provide ideas to extend your 'checks' into exploratory tests.

Let us take a standard test case:
Test case: Log in to the application using a valid username/password.
Expected result: Login successful; application screen is shown.
Now let us go through each of the action cards and see what ideas we can come up with to extend this into an exploratory testing session.

  •  IMPROVE - Make it better: (Add or change 1 or more elements depicted on the card to improve the object or idea.)

Using the action described above can you think of new ways to test by taking one element from the test case?

Thinking quickly for 1 minute I came up with the following:
    • How do we start the application?  Are there many ways?  URL?  Different browsers? Different OS?
    • Is the login screen good enough, or can it be improved (disability issues/accessibility)?
    • What are valid username characters?
    • What are valid password characters?
    • Is there a help option to explain what valid usernames/passwords are?
    • Are there security issues when entering the username/password?
Can you think of more?  This is just from stepping back for a minute and allowing creative thoughts to appear.  (Remember, there are no bad ideas.)

Let us now look at another of the action cards.
  • TRANSFORM - Use the object or idea on the card for a different purpose.
What ways can you think of from the example test case above to transform the test case into an exploratory testing session?

Again we could look at investigating:
    • What alternatives are there to logging in to the application? Fingerprint, secure token, encrypted key?
    • Can we improve the security of the login code?
    • What security issues can you see with the login, and how can you offer improvements to prevent these issues?
It takes very little time to come up with many more ways in which you can transform the test case into something more than a 'check'.

Now for the next action card (and the final one for the purpose of this article):
  • DISRUPT - Look at the picture, grasp what the purpose is, and come up with a completely different way to achieve the same purpose.
I may have already touched upon some of the ideas on how to disrupt in the previous two examples. That is not a bad thing: if an idea appears in more than one area, it could be an indication of an idea that is well worth pursuing.

Some ideas on disrupting could be:
    • Do we need a login for this? 
    • Is it being audited?
    • Is it an internal application with no access to the public?
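The same draw can be pointed directly at a test case: treat each element of the check (URL, username field, password field, and so on) as an 'image card' and apply a random action to it to seed an exploratory session. This is a hypothetical sketch of my own, the element and action strings are illustrative only:

```javascript
// Hypothetical sketch: apply a random Disruptus-style action to a
// random element of the login test case to seed an exploratory session.
const elements = [
  "login URL",
  "username field",
  "password field",
  "login button",
  "success screen"
];
const actions = [
  "IMPROVE: how could this element be made better?",
  "TRANSFORM: what else could this element be used for?",
  "DISRUPT: how else could its purpose be achieved?"
];

function pick(list) {
  // Return a random element of the list.
  return list[Math.floor(Math.random() * list.length)];
}

function charterSeed() {
  return `${pick(actions)} -- focus: ${pick(elements)}`;
}

console.log(charterSeed());
```

Each run produces one prompt, e.g. a DISRUPT question aimed at the login button; an unhelpful pairing costs nothing, since there are no bad ideas.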
I hope from this article you can see how such a simple game can help to improve your mental ability and testing skills, as Claire mentioned in her article.
Since software testing is a complex mental activity, exercising our minds is an important part of improving our work.
This is just a small part of the workshop, and I hope you have enjoyed the article.  If so, I hope to see some of you soon when I run the full workshop.

PS – I intend to run a cut-down version of the workshop for the next Atlanta Testing meetup whilst I am here in the USA.  Keep watching here for announcements in the near future.




Wednesday, 30 October 2013

A quick way to start doing exploratory testing

Whilst following the tweets from Agile Testing Days using the hashtag #agiletd, I came across the following quote from Sami Söderblom (http://theadventuresofaspacemonkey.blogspot.co.uk/) during his presentation 'Flying Under the Radar':

"you should remove 'expected results' and 'steps' from test cases"

Others attending the presentation tweeted similar phrases.
Pascal Dufour (@Pascal_Dufour): "#agiletd @pr0mille remove expected result in testcases. Now you have to investigate to get the expected result."
Anna Royzman (@QA_nna): "Remove Expected Result from 'test case' - to start thinking @pr0mille #agiletd"
Dan Ashby (@DanAshby04): "'you should remove expected results and steps from test cases' - by @pr0mille at #AgileTD - couldn't agree more!!!"
Pawel Brodzinski (@pawelbrodzinski): "Removing the expected result thus safety net from the equation enables creativity. Ppl won't look for compliancy anymore. @pr0mille #agiletd"
This was a WOW moment for me. I have sometimes struggled to get people to adopt exploratory testing, with people finding it hard to create charters and make them flexible enough.  It may not be an ideal solution, but for me it is a way in to teams that may be deeply entrenched in test cases and test scripts.

Thanks Sami - another creative idea that I most certainly will put into use.

Friday, 12 April 2013

Creative and Critical Thinking and Testing Part 7

This is the final post in this series on creative and critical thinking and testing.  It has been a journey of discovery for me, and along the way I have found out that there is more to how we think when testing than even I first thought, and all of this came about from an initial Post-it note diagram. Along this journey we have:

Looked at the thinking required for documentation review, test planning, test execution and test analysis.

This final post of the series will look at the styles of thinking required when we are reporting our testing.

Test Reporting

So after the planning, execution and analysis, you are now ready to report your findings.  The style of thinking required for this phase appears obvious, in that you need to be creative in how you are going to present the information you have found.  You need to make sure that it is clear and easy for your reader to understand, without any chance of it being misunderstood or, more dangerously, misused.  To do this you will need to think a little critically and ask yourself the following about the information you are presenting:


  • Can it be interpreted in different ways?
  • Is the most important information clearly shown (in the context of what is important to the reader)?
  • Have I made the context clear?
  • If using numbers am I using this to back up a story?
  • Have I made the main risks and issues very clear?
  • Is what I am reporting ethically and morally correct?

There are many more questions that you can ask yourself, but the key at this level of critical thinking is to ensure you are objective about what you report and as unbiased as possible in your reporting.  There are a few methods that can be used to help with reporting, and we can learn a little from the medical world and how they use critical thinking in medical research reporting:

“Students should also be encouraged to think about why some items need to be reported whereas others do not”

It is important to think about what should not be included, since this aids clarity for the reader.
Returning to creative thinking one effective and creative way to represent your testing information is by the use of dashboards.

Dashboards
James Bach talked about a low-tech testing dashboard on the Rapid Software Testing course and has some information about it on his website.

Low Tech Testing Dashboard Example – James Bach - taken from these slides

Del Dewar went slightly further and presented the dashboard as shown below.



More information on what each of the columns, numbers or colours mean can be found via the links – in other words, I want you to do a bit of research into dashboards and how useful they may be for your own test reporting.

From my own experience of using these styles of dashboard for test reporting, I found that they gave a very quick overview of the project and of the issues, but were not good at reporting progress, which is something that test management required.  This leads to storytelling and metrics.

Session Notes and Wiki Links
One more thing to add here: when I tried the above with senior management teams, there was a request for access to the data of what was actually tested; in other words, they wanted access to the session notes and actual evidence of what had been done.  To solve this, at the top level of each dashboard we provided a link to the wiki sections where we kept all the session notes.  I encourage you to have full transparency of testing effort and to allow access to all who want to see what testing has taken place; I feel it helps if there are no barriers or logins in the way of people being able to access the raw data.

If, as described earlier in this document, we are using session-based test management, then we should be producing evidence of what we have tested and the information we have found as we go along, capturing it using whatever method is best: video, screen capture, wiki.  This should be kept in a place to which all have access, and everyone (who matters) should be aware of its location.

Storytelling
The next thing that you need to do with your test reporting is to tell a story or two.  This again requires some deep critical thinking.  Michael Bolton says that test reporting is about telling three stories at three levels.  I provide a quick summary below; for full details refer to the original article, the link to which is available here.


  • Story of the product – this is where you tell the story of the product using qualitative measures: using words and creative descriptions of the product, what it does and did, and what you found interesting.
  • Story about testing – this is used to back up your story of the product: what did you do when testing, what did you see, and what did you cover.  It is about where you looked, how you looked, and where you have as yet not looked.
  • Story about progress – the final story is the one about why you think the testing you did was good enough, and why what you have not tested was less important (critical thinking).

Michael has a lot more information about test reporting in a great series of articles:



Markus Gärtner summarises this in his article titled "Tell a story at the daily stand up".

As can be seen from the articles published by Michael Bolton, you quickly switch from one style of thinking to the other depending on the context of the story you are telling.  This is a difficult skill for a tester to master, but once you practise it you can become an expert test-report storyteller.

Another way in which you can be creative and report your testing as well as your test planning is by using mind maps. Darren McMillan produced a great article on this and it can be found here.

It is also important at this stage to remember your test plan and look at what information in it you will need to update.  What you found out during testing, and how your risks and priorities may have changed, need to be reflected in your test plan.

Qualitative vs Quantitative
There have been many discussions within the testing community about qualitative and quantitative measurements, some of which I will share here as useful information.  It is very easy to fall into the trap of believing that numbers tell the whole story and, since they are easy to collect, will provide all the information that management require.  I think we need to be careful of this and use our critical thinking to make a judgement on what the numbers really provide.

Cem Kaner has an excellent article on the validity of metrics here, and the thing I most noted about it was the following:

“If our customers demand these metrics then we provide them but we have a moral & ethical duty to inform that they are flawed”

I agree with this, but we need to use both our critical and creative thinking to provide the story to go with the metrics.  I think we all agree that quantitative measures are flawed, but we need to be able to think creatively to resolve this and provide the information the customer requires in a way which is simple and easy for them to understand, without misleading anyone.

Some of the discussions within the testing community about test metrics:





NEXT

So you have got to the end of this article and hopefully have an understanding that the different stages of testing require different types of thinking at different levels.  So what do you do now?  First of all, this is not the end of the journey.  You now go back to the start and continue until told not to; at the same time you can continue to practise some of the lessons and examples given in this document.  Improve them, be creative and create your own, adapt them to fit your needs.  This is not a document of best practice; it is a springboard to help you, a reference guide, if you like, that can be modified and act as a starting point for you to improve and learn more about what style of thinking you may need when testing.

The important lesson is that testing requires a great deal of thinking and if you are not thinking when you are involved in any testing activity then maybe, just maybe, you may not be testing.

Enjoy.

John Stevenson

Thursday, 4 April 2013

Creative and Critical Thinking and Testing Part 6

The previous articles in this series have looked at what critical and creative thinking are, defined the stages of testing, and looked at the thinking required for documentation review, test planning and test execution.  This article looks at the test analysis phase and the styles of thinking that may be best for this stage, as described in the diagram from the first article of this series.

Test Analysis

So you have finished your testing session and gathered together all sorts of evidence of what you have uncovered, discovered and learnt.  Now is the time to look in detail at what you have done and found, and apply some critical thinking to the task.

This is one stage within testing which I feel is understated, and often not a great amount of time and effort is spent on it.  However, I think it is one of the most valuable for the tester who carried out the testing, since it allows them to analyse what they have done, to think critically about themselves, and to see if there are improvements they can make.

It is interesting, if you Google 'test analyst' and 'definition', the variety of responses that are returned.  A selection of extracts is shown below:

“A test analyst reviews the results of process tests in a company's operating systems or manufacturing processes. The analyst also researches potential defects and works in tandem with engineers to provide solutions.” 
(Ehow.com – Test Analyst Job Description)
“In the position of test analyst, a person fulfils an important role by reviewing results from process tests in a business’s manufacturing or operating systems. The analyst will also research potential deficiencies and work together with engineers in order to provide solutions” 
 (Job is Job Test Analyst job description)
“The Senior Test Analyst is responsible for designing, developing, and executing quality assurance and control processes, test strategies, test plans and test cases that verify a software conformance to defined acceptance criteria (i.e. System behaviours) and feature design documents, as well as application standards”  
(Aus registry job specification Senior Test Analyst)
“Works with the Testing team to assist with the preparation of test plans and the testing of software in line with company guidelines and standards”  
(Vector UK Job description Test Analyst)

What I find interesting is that there appear to be two main definitions of a test analyst: one who analyses what has been tested, and one who plans, executes and analyses the test results.  It appears that, over time, what could or may have been a specialist role has become one which is interchangeable with the title of 'tester'.  There is nothing wrong with this, and it may slightly digress from the purpose of this section, but I thought it was a useful comparison of what the definition of a test analyst is.

In my own world it is a stage in which all testers need to be proficient, having the necessary skills and thinking to carry out the task.  The analytical skills of testers, in my opinion, appear to be a forgotten skill, or one on which less importance is being placed.

DEBRIEF

The first thing that should be done after you have completed your test execution phase is to reflect on all you have done, and the best way to do this is to debrief and talk to other people.  There are many ways that you can do this, but one model within the session-based test management framework is the use of PROOF (expanded to PROOF LA).  This is a very useful and quick way to explain what you have done, what you need to do, and what you plan to do, by the use of some simple questions.

  • Past. What happened during the session?
  • Results. What was achieved during the session?
  • Obstacles. What got in the way of good testing?
  • Outlook. What still needs to be done?
  • Feelings. How does the tester feel about all this?
  • Learning.  What have you learnt? What could you do better?
  • Adaptation. What do you need to adapt or change?

If you are working in a small team then it may not be possible to do this style of review; however, there is nothing to stop you doing some self-reflection and using these questions to debrief yourself.  You may notice that these questions require both critical and creative thinking in equal measure.  The questions will make you either think critically about what you have done or need to do, or think creatively about what you could do better and the improvement ideas your creative thinking can help generate.

WHAT DID YOU LEARN?

When you have completed the debrief it is important to spend a little more time thinking about the 'new' stuff you have learnt.  This is valuable information that could be used not just by you but by other people.  Using the evidence you have gathered, you could put together some 'how-to' guides if there were parts of what you did that were surprising or difficult.  This also aids your own memory and helps to reinforce your learning.  The added benefit is that others can look at this and use it to aid their own learning and understanding.  The way I implement this is with a wiki: for the project we are working on, we have a section in which we link to or create useful information.

DEFECTS

Looking at your testing evidence, the next thing you may want to do is think critically about the issues you found which could be defects.  You may want to first try to repeat the issue to ensure it is reproducible (some might not be).  You may want to talk to a developer, customer or architect to discuss whether it is really a defect.  After thinking about the issue, you may then want to raise it as a defect and attach the evidence you captured during the test execution phase.  Creating a good defect report requires a lot of critical thinking on the part of the tester, and I would highly recommend that you participate in the Bug Advocacy course, or at least work your way through the 'free' course material.  If you really want to get your defects fixed you need to present a compelling argument as to why each defect needs to be fixed, and this course can help you achieve this.

*To do the Bug Advocacy course you need to complete the Black Box Software Testing foundations course first.

AUTOMATION

Once you have raised your defects, you could now use a mixture of creative and critical thinking to see which of the evidence you have gathered would prove useful to automate.  In this context I am talking about the use of BDD test frameworks.  At the same time, it could be worth using some creative thinking to see what automation tools could help support your exploratory testing.  It is useful to remember that automation is not just a checking exercise: it can prove invaluable to exploratory testers in aiding their testing.  Michael Bolton wrote a blog post on this subject, in which he talks about manual and automated testing and the meaningless use of labels.

FUTURE IDEAS

One area that people carrying out exploratory testing appear to miss out on is looking for future opportunities to test.

We forget the definition of exploratory testing, especially the 'test design' part:

“Simultaneous test design, execution and learning” 
 (Exploratory Testing explained)
If you did not make any notes of future things to test when carrying out the test execution phase, then you may soon have no job to do!  This is an important aspect of exploratory testing, and one on which you need to remain focused when testing.

If you do have a list of ideas, this is the time to use some more critical thinking and see which ideas have value within the context of the project you are currently testing.  If it helps your critical thinking about the value of an idea, you can give each idea a priority and a risk value.  Another approach that may help you critically evaluate your test ideas is to discuss them with other members of the team.  You could also apply some test framing.  It should be noted that when you are critically evaluating your future test ideas, your creative thinking side may become active and come up with other novel ideas that you may be able to test.  You should note these down as well, since they could prove to be valuable.

Improvement

We are not perfect, and we should constantly look for ways in which we can improve.  An aspect of test analysis is to reflect on what you have done and think critically about what improvements you could make.

The fields of social science and ethnographic research can be helpful here, and I wrote an article on this.  From that article I put together a way that testers can use reflection to help improve their testing.  An abstract of this can be seen below:

Reflect
Personal reflection:

  • Could you have done things better? If so, what? (Both from a personal and a testing perspective.)
  • Have you learnt new things about the product under test (that are not documented)?
  • Has your view of the product changed for better or for worse? Why has your view changed?

‘Epistemological reflexivity’ (What limits did we hit?)

  • Did your defined tests limit the information you could find about the product?  (Did you need to explore new areas that you had not defined?)
  • Could your tests have been done differently? If yes, how?
  • Have you run the right tests?
  • If you had done things differently, what do you think you would have found out about the product?
  • What assumptions have you uncovered to be true/false?
  • Did the assumptions you made impede or help your testing?

The University of Plymouth produced an article on critical thinking and reflection which has some useful ideas that may help when you come to reflect on your own improvements.  They have also produced a nice critical thinking poster.

As you can see from the above, the test analysis phase mainly requires critical thinking, with some creative thought, since the majority of the work you do during this phase is to reflect on what you have done, what you need to do, and how well you have done it.

The next section will look into the test reporting phase.

Tuesday, 26 March 2013

Creative and Critical Thinking and Testing Part 5

The previous articles in this series have looked at what critical and creative thinking are, defined the stages of testing, and looked at the thinking required for documentation review and for test planning.  This article looks in depth at the test execution stage and the style of thinking that may be best for this stage, as described in the diagram from the first article of this series.

Test Execution



When it comes to actually carrying out test execution (and by that I mainly mean manual testing in an exploratory way, using session-based test management), there is a need for a fairly even split between critical and creative thinking.  When you are doing test execution you are carrying out:



“Simultaneous test design, execution and learning” (James Bach)
When in the test execution phase, the split between the two styles of thinking is evenly matched: there are times when you will need to think creatively and times when you need to think critically.  Switching between these different styles is not easy, and it is a hard skill for a tester to master.  The more you practise, the easier it becomes.  However, session-based test management can be very powerful in helping with your thinking styles, since it encourages the use of focused and uninterrupted periods of time, which is vital for both styles of thinking to be fully utilised.  To do this correctly, it may be a good idea when doing test execution to do the following:

  • Switch off email
  • Unplug phone
  • Put up a sign saying "please do not disturb, thinking testing in progress".

Carrying out these simple things may help you remain focused on your thinking process when testing.

It is not possible to do two thinking tasks at the same time, no matter what you believe you can do.  See here, here and here.

So you need to be creative in thinking about what you may want to test next, based upon what you are experiencing whilst testing.  At the same time you are learning about the system, and need to think critically about what assumptions you are making from what you see, deciding whether what you are doing is the best use of your time.

One method that can be used when testing is test framing, a critical thinking approach that is particularly useful during the test execution phase.  It provides questions that can engage your critical thinking side, such as:

  • Why are you running (did you run, will you run) this test (and not some other test)?
  • Why are you running that test now (did you run that test then, will you run that test later)?
  • Why are you testing (did you test, will you test) for this requirement, rather than that requirement?
  • How are you testing (did you test, will you test) for this requirement?
  • How does the configuration you used in your tests relate to the real-world configuration of the product?
  • How does your test result relate to your test design?
  • Was the mission related to risk? How does this test relate to that risk?
  • How does this test relate to other tests you might have chosen?
  • Are you qualified (were you qualified, can you become qualified) to test this?
  • Why do you think that is (was, would be) a problem?

It should be noted that these questions can be used during any of the testing stages.  From my own experience, test framing provides the most value during the test execution phase, since the problem you are trying to frame is at the forefront of your thinking.

When carrying out your testing session it is important to think critically about priority.  This is something we tend not to think much about when testing; we mainly consider it during planning or when reporting bugs.  I feel that when you are in the test execution phase it is important to use the framing techniques mentioned above to adjust your priorities based upon what you are uncovering and the new information you are being presented with.  It is not a race to complete your testing sessions: slowing down just a little to think critically about the testing you are doing, weighing the value of what you are doing at that moment, and adjusting the priority based upon your thinking is worthwhile.  There are dangers within this approach in that your thinking could be clouded by your many biases; however, this can be mitigated later during the test analysis phase.

On the creative side it is important to remember that one of the key parts of exploratory testing is looking for new opportunities to test, and the best way to do this is to think creatively and, as you uncover new things you had not thought about testing before, to make a note of them.  Note taking is important for effective testing.  If you do not make notes there is a strong chance of forgetting that great idea or concept you thought of at the time.  It is important to capture your thoughts and your ideas for future use.  This can be done in many ways, from the creative use of video, capture and replay tools, wikis, mind maps and session recording tools (Session Tester, Rapid Reporter) to good old pen and paper.  If using pen and paper, a good way to make your creative ideas stick is the use of drawing and images; I highly recommend the book 'The Back of the Napkin' by Dan Roam for more information on this.  I also suggest talking to Andy Glover (@cartoontester) since he has a lot of great thoughts and ideas on the use of creativity in testing.

How you capture is not important; it is what you capture that matters.  One issue I have had over the years of doing exploratory testing is knowing how much detail is enough for my notes.  One creative method I have found useful is, at the end of each day, to email my session notes to myself, and the first email I read the next morning is those session notes.  If I cannot understand them at that time, less than 24 hours after I wrote them, how can I expect other people to understand them?  Over the years this has enabled me to refine the amount of detail in my session notes and become better at note taking.
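If you wanted to automate that notes-to-self routine, it could look something like the rough Python sketch below. The address and SMTP host are placeholders, not a recommendation of any particular mail setup:

```python
import smtplib
from datetime import date
from email.message import EmailMessage


def build_session_notes_email(notes: str, addr: str) -> EmailMessage:
    """Package today's session notes as an email addressed to yourself."""
    msg = EmailMessage()
    msg["Subject"] = f"Test session notes {date.today().isoformat()}"
    msg["From"] = addr
    msg["To"] = addr
    msg.set_content(notes)
    return msg


def send_to_self(notes: str, addr: str, host: str = "localhost") -> None:
    # Assumes an SMTP server is reachable on the given host.
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(build_session_notes_email(notes, addr))
```

The point is not the tooling but the delay: reading the notes the next morning is what tests whether they are understandable.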

As human beings we seem to be very afraid of failure and try to avoid situations in which our thoughts and ideas are wrong. In my opinion the best part of using exploratory testing is the ability to test lots of the thoughts and ideas you have just had while testing, without being too bothered whether they are right or wrong.  It is about testing your theories, and it costs very little if they prove to be wrong.  We need to have, and test, lots and lots of ideas to help us understand more about the system; in doing this many of our ideas will fail and in many cases we will be proved wrong.

I have written previously about placing many bets and how useful this is for testing.  Frans Johansson, in his book 'The Click Moment', explains more about the theory behind this thinking.  To be creative we need to embrace failure and see the learning opportunities it provides. There is an interesting article on this concept of failure helping us to learn, which has been given the title of 'productive failure'.  It appears that we can learn more and become better by being allowed to fail rather than by only being correct.  There are other articles on the fact that allowing ourselves to be confused is good and encourages us to learn more.  So when you are testing it might sometimes be good to be confused about the software, since it may help you learn more.

So, to summarise the execution phase: as a tester you need to use both styles of thinking equally.  You need to be creative in how you test and in the ideas you come up with to test now and in the future. Alongside this you need to think critically about what you are currently testing: is what you are doing right there and then the right thing, and do your biases influence your approach?   It is not easy to switch between these two styles of thinking during a period of testing, and maybe this explains why some people feel mentally exhausted after a testing session: their brain has been engaged in a constant battle between creative and critical thinking.

The next section will look at the style of thinking needed to analyse the testing that has been done and the evidence you have gathered.

Wednesday, 20 March 2013

Creative and Critical Thinking and Testing Part 4

The previous articles in this series have looked at what critical and creative thinking are, defined the stages of testing and examined the thinking required for documentation review.  This article looks in depth at the test planning stage and the style of thinking that may be best suited to it, as described in the diagram from the first article of this series.

Test Planning

The traditional approach to test planning has involved mapping requirements to test cases and creating test scripts with steps and expected results.  This activity has started to be replaced with the use of automation, especially using frameworks such as Cucumber and behaviour driven development methods.  In conjunction with this, testers use the exploratory testing approach of creating charters and missions to generate test ideas. During test planning testers should primarily be using creative thinking with some critical thinking.

Since the cognitive outcomes depend on thinking of ideas and of new things and ways to test, the best way to achieve this is by the use of creative thinking.  Testing is very good at finding ways in which the software does not work; our role is to exercise the software as best we can, and planning, at a high level, novel and interesting ways to do this based upon our skills, knowledge and experience is a good exercise in creative thinking.

There is some overlap here with test execution since when we test we find more to test and whilst following our mission find more creative things to do.

So to encourage our creative thinking we need to use tools and approaches that are more suited to this way of thinking.

Mind Maps

One way to do this is to not impose too much structure, and to allow people to follow their creative processes.  The use of mind maps allows people to capture their thoughts and ideas.  They can then amend, group, ungroup and change the information they have thought about in a way that encourages and follows their creative thinking process.  It provides a visual reference of their ideas and allows them to see the bigger picture. There are many articles here, here and here on the use of mind maps to aid creative thinking, but to quote from one:
“Mind mapping is one of the most powerful tools for capturing and cultivating ideas” Mind maps and creative thinking 
Currently the majority of teams I have been involved with are using Freemind as the de-facto tool for mind mapping within the organisation.  There are a great many articles on the use of mind maps for test planning; for those interested you can read more here, here and here.

During the planning stage there are occasions when your mind suddenly goes blank and you cannot think of new things to test.  This happens in all creative activities (it is commonly known as writer's block); however, there are tools and techniques that can help to overcome it, and that is what we shall look at next.

Checklists and cheat sheets

One thing that can help generate ideas is the use of checklists and cheat sheets.  There are many examples of these within testing and a few that can be recommended are:

Brainstorming

There are arguments for and against using brainstorming to improve the creative process.  However, the research appears to support the view that collaboration with others can have a positive effect on creativity, as long as the people involved encourage and support rather than criticize and say negative things.  It has to be handled sensitively for collaboration to work when thinking creatively.   For creativity to work correctly it is important to be in a creative environment, and there are many articles giving suggestions on how this can be achieved (Creating a creative environment for brainstorming), (Creating innovative workplaces).   The benefit is that another person's perspective can trigger a new path to follow, or spot something you may have missed or that does not make sense.

If you are asked to review someone's creative work and ideas, you have a vital role in ensuring you do not become too negative, and in supporting and encouraging their ideas.  It is very easy to squash someone's creative side with negative throwaway comments.  I am not saying there is no need for debate and discussion, but it needs to be conducted with empathy and respect.

There are some interesting articles about feedback and they can be found here, here and here.

Test Tours and Personas

Within testing there is the concept of testing tours, which have been around for quite a while and can prove useful for helping to drive creativity.  There are many useful tours which can help you think about a certain function or feature and find novel ways to test it.  Some examples include:
  • Documentation Tour: Look in the on-line help or user manual and find some instructions about how to perform some interesting activity. Do those actions. Improvise from them.
  • Feature tour: Move through the application and get familiar with all the controls and features you come across.
  • Obsessive compulsive tour: Perform tasks multiple times, perform tasks multiple times, perform tasks multiple times
  • Back alley tour: Test the least-used features
  • Testability tour: Find all the features you can use as testability features and/or identify tools you have available that you can use to help in your testing.
  • Continuous Use: While testing, do not reset the system. Leave windows and files open. Let disk and memory usage mount. You’re hoping that the system ties itself in knots over time.
There are many more, and I have provided links to them here, here, here and here.  These can be used during the planning stages to help you focus on certain areas and, with creative thinking, devise novel ways to 'tour' around the application.

Testing personas are fictional people who represent a typical type or group of users of the software.  When you are thinking creatively you can come up with many different types or groups of people and then note some ideas of how each persona would test the software. As with testing tours, there have been many articles on using and creating testing personas, some of which I have added here:
There are unlimited personas that can be created and you are only limited by your imagination; hopefully these references will provide you with enough information to create your own.

The “what if” question


One fine way to help generate ideas for testing is to ask yourself questions to which you do not know the answer, but where you think there would be benefit in finding it out.

For example:
  • What if I tried this?
  • I wonder what would happen if I did this?
  • What if there was no (add your own word here)?
  • How many ways can I do this?
  • Can I make it do the opposite?
  • Is there anything that I can change that may affect the software?
These are good starting points if you feel you are running out of ideas and can help create opportunities for more avenues to explore.

Automation

One part of test planning is to look at what information you already know and consider ways to automate it using BDD or any other method that works for you.  You need to use critical thinking to decide which of your creative ideas would be most suitable to automate, and to create a list of these possible automation opportunities.
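To make that concrete, here is a minimal sketch (not tied to Cucumber or any particular framework) of a creative test idea captured in given/when/then form so it could later be automated; `validate_username` is a hypothetical stand-in for the system under test:

```python
# A creative test idea ("what happens when a username contains spaces?")
# written in given/when/then style, ready to move into a BDD framework later.

def validate_username(name: str) -> bool:
    # Hypothetical stand-in for the system under test:
    # usernames must be non-empty and contain no spaces.
    return bool(name) and " " not in name


def test_username_with_spaces_is_rejected():
    # Given a username containing spaces
    name = "ann other"
    # When it is validated
    result = validate_username(name)
    # Then it should be rejected
    assert result is False


test_username_with_spaces_is_rejected()
```

The value is in the translation step itself: deciding which creative ideas are stable and important enough to pin down as repeatable checks.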

Critical thinking of creative ideas


One important note to make about creative thinking is that once you have created your ideas you must then use some critical thinking to ensure that they are sound.  As Ken Robinson points out in his book "Out of Our Minds":

“Creativity is not only about generating ideas; it involves making judgements about them. The process includes elaborating on the initial ideas, testing and refining them and even rejecting them, in favour of others that emerge during the process.” (Out of Our Minds - Ken Robinson)

The next article will look at test execution and the thinking style that might be best for that.

*Oops - Had to do some editing of the article and add some references that were missing.

Friday, 15 March 2013

Creative and Critical Thinking and Testing Part 3

The previous articles in this series have looked at what critical and creative thinking are and defined the stages of testing.  This article is the start of looking at each stage in detail, advising which type of thinking may be best to apply at that stage, as described in the diagram from the first article of this series.

Documentation review

Even though we are moving towards a more 'Agile' style of software development, it does not mean there should be no documentation, and for complex projects it can be vital.  There are various documents that can be created; the ones most commonly accessed by testers are the requirements, design specifications and high level design.  The old schoolers may remember this being referred to as 'static testing' or 'inspections and walkthroughs'.

One of the early stages of testing is for testers to review requirement documents from a testability perspective.  When reviewing the document the tester should be asking questions about the statements being made and thinking critically about what has been stated.  For the tester, this stage of testing should mainly be a critical thinking exercise with some aspects of creative thinking.

The question is how do we apply critical thinking to documentation review?

One way is by the use of checklists and heuristics (rules of thumb) to prompt our thoughts on the testability of the requirements.  One example is shown below:

  • Do questions need to be asked of the requirement before a test can be derived? If so, it is incomplete.
  • Are there two or more requirements that are in conflict with each other? If yes, they are inconsistent.
  • Can the requirement be interpreted in more than one way? If it can it is ambiguous.
  •  Does the requirement contain the words AND, OR, NOR, IF-THEN-ELSE? If it does, it is likely to be compound.
  •  Does the requirement fail to comply with any of the five criteria? If yes, then it is not testable.
  • Does the requirement deal with the ‘what’ rather than the ‘how’ (e.g. design)?
  • For example, the requirement ‘Provide a database’ states how a function should be implemented rather than what it is that is required. It should instead read ‘Provide the capability for traceability between requirements’
  • Is the requirement written as an operation rather than a requirement?
  • For example, ‘It will be stored in a x20 rack’ describes the operation rather than the environment. It should read ‘It shall be designed for storage in a x20 rack’.
  • Is the requirement written using the correct terms?
  • For example, the terms shall, will and should: Requirements use shall Statements use will Goals use should
  • Does the requirement contain the words is, was or must? These should be removed.
  • Are there any requirements that are missing? (e.g. reliability, quality)

Other sources and checklists that can be used to encourage critical thinking about the document under review include:

Software Quality Characteristics: this is useful for spotting missing information in requirements by asking questions such as:

  • Diagnostics: is it possible to find out details regarding customer situations?
  • Compatibility: does the product comply with common interfaces or official standards?
  • Standards Conformance: does the product conform to applicable standards, regulations, laws or ethics?

There are some similarities between the heuristics and models you use during test execution and those you use when reviewing documentation, since in both cases your mind is trying to work through how the program will function; some of the models you use for test execution can therefore also apply to reviewing documentation.

For example, you can use the consistency heuristics created by Michael Bolton and James Bach, known by the mnemonic HICCUPPS:

  • History
  • Image
  • Comparable products
  • Claims
  • User expectations
  • Product
  • Purpose
  • Statutes

Another example comes from the Rapid Software Testing course and the Lessons Learned in Software Testing book: the use of Reference, Inference and Conference. A good article about this can be found here.

  • Reference: Use all documents and ensure they agree
  • Inference: What are my assumptions about the requirements?  Are my assumptions correct?
  • Conference: Speak to the rest of the implementation team about the issues around testing.

I am not suggesting using all of these methods, but rather mixing and matching to choose which works best for you.  You may choose not to use any of them and create your own (this is the creative thinking element). If you do create your own, it would be great to share it with others.

So let us try to apply this in practice.
  • Incomplete example:
    'The system shall restrict access'
    Should be rewritten as: 'The system shall control access via usernames and passwords'
  • Consistency example:
    'The system shall calculate all distances in miles'
    'The system shall calculate all speeds in km per hour'
    These should be rewritten as: 'The system shall calculate all distances in miles' and 'The system shall calculate all speeds in miles per hour'
  • Inaccurate example:
    Requirement: 'All transactions shall be improved by 1 second'
    The customer actually requires faster logon times, so the requirement should read: 'Daily average user logon shall be improved from 5 seconds to 4 seconds'
  • Ambiguous example:
    'An employee is entitled to 4 weeks holiday a year'
    Should be redefined as: 'An employee is entitled to 20 working days holiday per calendar year'
  • Embedded (compound) example:
    'The target platform for the server system shall be Windows 2000 and Windows NT for the client system'
    Should be separated into two requirements:
    'The target platform for the server shall be Windows 2000'
    'The target platform for the client shall be Windows NT4'
Another useful checklist is the one from Ron Patton's book on Software Testing, which deals with reviewing design specifications:

A good, well-thought-out product specification, with "all its t's crossed and its i's dotted," has eight important attributes:

  • Complete. Is anything missing or forgotten? Is it thorough? Does it include everything necessary to make it stand alone?
  • Accurate. Is the proposed solution correct? Does it properly define the goal? Are there any errors?
  • Precise, Unambiguous, and Clear. Is the description exact and not vague? Is there a single interpretation? Is it easy to read and understand?
  • Consistent. Is the description of the feature written so that it doesn't conflict with itself or other items in the specification?
  • Relevant. Is the statement necessary to specify the feature? Is it extra information that should be left out? Is the feature traceable to an original customer need?
  • Feasible. Can the feature be implemented with the available personnel, tools, and resources within the specified budget and schedule?
  • Code-free. Does the specification stick with defining the product and not the underlying software design, architecture, and code?
  • Testable. Can the feature be tested? Is enough information provided that a tester could create tests to verify its operation?

There are many other areas within the review phase in which critical thinking can play an important part that we have not touched on in this article, and I suggest readers investigate further where testers can add value to a development project by thinking critically.  We can influence such areas as code reviews, walkthroughs and inspections.

The next article will look in depth at Test Planning and the style of thinking required for that stage.

Monday, 11 March 2013

Creative and Critical Thinking and Testing Part 2

The last blog post looked at what critical and creative thinking are and which may be most useful during different stages of testing.  This post gives an overview of each stage; later posts will look at each stage in depth.

Stages of testing.

Before we start to look at how we can use these types of thinking within testing we need to break down the testing life cycle into manageable stages. This is not an exhaustive list of the many stages; the labels are given to provide a high level overall view and to simplify the classification of which style of thinking is most suited to each phase.  It is not meant to state that these are the only things we need to think about in these stages, nor that there is no crossover between stages.  They are meant as 'guide' lines only and not as best practice, process or otherwise.

Documentation Review (Static Testing)

  • Requirements
  • High Level Design (HLD)
  • Feature documents
  • Specifications
  • Standards
  • Regulations
  • Code review
  • Walkthroughs
  • Inspections
  • Oracles
Documentation review does not just mean reviewing requirement documents; it could mean reviewing code, database models, country laws and standards.  Requirements are only ONE source of information (oracle).   Instead of going into great detail about the role testing plays in the documentation review stage, there are many resources available on-line that can do a far better job than me.  I highly recommend that people look at these links if they wish to know more about static testing and documentation review.

Cem Kaner - Testing Computer Software book - has an excellent section on document review 
Ron Patton - Software Testing - Chapter 4 - examining the specification

* Disclaimer - I may not agree with all the approaches, processes and methods suggested in the above links, but they provide information about the diversity of what documentation review is in the context of software testing.

Test Planning

  • Test Ideas
  • Automation – feature files (cucumber)
  • Missions/Charters
  • Mind mapping
The test planning stage is one which is much maligned within the testing community.  People make all sorts of statements about the purpose of test planning, with some saying to do as little as possible (Agile style) and others saying it should be wrapped in best practice, process and standards. In my own world there is an important need for planning, but there is a case for not doing too much.  I feel that with test planning you should do just enough to enable you to start doing some actual testing.  Once you start testing you can then update and maintain your test plan based upon what you experience and discover.  I try to align myself with James Bach's approach to test planning, which can be found here (page 21).

Test Execution

Now we come to the key element of testing, the part where the tester should engage their brain and start to ask questions of the software.  I am talking about manual testing and specifically exploratory testing.  There are many articles on what exploratory testing is and how to do it; rather than repeat that information I will provide just one link to the excellent resource by Michael Bolton here.  This is one page I have bookmarked and come back to time and time again.

Test Analysis

  • Bug investigation
  • Defect reporting
  • Repeatability
  • Questioning
  • Future Automation
  • Debrief
  • Future missions/Charters

This is the often forgotten stage of testing in which the tester takes a little time to reflect on what they tested: did they do the best they could within the constraints they had? Could they have done things differently?  This time could be spent reviewing session notes and investigating issues uncovered during the execution phase that they thought could be bugs.  It could be used to check whether a bug is repeatable and then raise it in a defect tracking system with full evidence of how to recreate it.  It could be looking at what might be useful to automate from the information discovered, or adding new ideas from the notes for future testing.  It is also the time to debrief and talk to others about what you did.

Test Reporting

  • Dashboard
  • Wiki – sessions
  • Updating plan
  • Qualitative
  • Quantitative
Test reporting is a highly controversial subject within the testing community, with much discussion about the use of metrics.  There are some great articles out there on this subject and I am not going to attempt to summarise it here.  If you want to understand more about the metrics debate then may I suggest you start with this blog post of mine, then have a look at some of the following articles from Michael Bolton's resources.
To me the best way to do test reporting is to tell a story, using numbers to help support that story. A good way to do this is by the use of dashboards; a great example can be found here.

How does this apply to testing?

So now we have our testing stages, we can start to see which style of thinking is most suited to each stage, not forgetting that the other style will, and should, still be used; the focus should simply be on the primary style of thinking for that stage.

The next article in this series will look in depth at documentation review and the style of thinking that may be most suited for this phase