Sunday, 11 July 2010

Managing Exploratory Testing with Mercury Quality Center

I thought I would write about my experiences of using Mercury Quality Center (MQC) to help manage my exploratory testing sessions.

When carrying out exploratory testing I use James and Jon Bach's Session-Based Test Management approach. What I found was that the tool provided did not match the needs of the company and was a hard sell to management, since we already had commercial tools for capturing testing effort (MQC). I had to re-think how I could get buy-in from management on using the exploratory testing approach whilst making use of the tools we already had.

One of the first things I did was implement a structure within the test plan section of MQC. So I defined the following folder structure for each project:

Project Name -->
Test Charter -->
Mission Statement -->
Test Idea(s)

So under the planning section testers can define a folder name for the test charter they are working on, then add a folder for each mission statement, and then add their test ideas.

The thinking behind this was that, at a glance, anyone can see what has been covered under each test charter and see if there are any gaps. Reports can be pulled off and used during debrief sessions to act as focus points when discussing the testing that has been done.

I created a Test Plan Hierarchy using a standard numbering scheme for the folder and test idea names. This helped with traceability and navigation around the test plan.

Project Name -->
01 – Test Charter 01 -->
01.01 – Mission Statement 01
01.01.01 – Test Idea 01
01.01.02 – Test Idea 02
01.02 – Mission Statement 02
01.02.01 – Test Idea 01
01.02.02 – Test Idea 02
02 – Test Charter 02 -->
02.01 – Mission Statement 01
02.01.01 – Test Idea 01
02.01.02 – Test Idea 02
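The numbering above can be generated mechanically. Here is a minimal Python sketch; the nested charter/mission/idea names are placeholder example data, not taken from a real MQC project:

```python
# Example test plan: charters -> mission statements -> test ideas.
# Names here are placeholders for illustration only.
plan = {
    "Test Charter 01": {
        "Mission Statement 01": ["Test Idea 01", "Test Idea 02"],
        "Mission Statement 02": ["Test Idea 01", "Test Idea 02"],
    },
    "Test Charter 02": {
        "Mission Statement 01": ["Test Idea 01", "Test Idea 02"],
    },
}

def numbered_plan(plan):
    """Yield (folder_id, name) pairs using two-digit zero-padded numbering."""
    for c, (charter, missions) in enumerate(plan.items(), start=1):
        yield f"{c:02d}", charter
        for m, (mission, ideas) in enumerate(missions.items(), start=1):
            yield f"{c:02d}.{m:02d}", mission
            for i, idea in enumerate(ideas, start=1):
                yield f"{c:02d}.{m:02d}.{i:02d}", idea

for folder_id, name in numbered_plan(plan):
    print(f"{folder_id} - {name}")
```

Generating the identifiers rather than typing them by hand avoids the easy mistake of mis-numbering a level part-way down the tree.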

MQC is set up for a formal, scripted form of testing based on test cases and test steps. I have not found a way to get around this; however, instead of test cases I use test ideas, and I needed a quick way to create new test ideas without being bogged down in writing details about lots of steps. So I suggested that each test idea has ONLY the following information:
  • Test Idea Name
  • Test Idea description (This should be as descriptive as possible – include any models/heuristic thinking/problem solving ideas)
  • A single test step - This is required by MQC so that the user can run the test and record its status (Pass/Fail etc)
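The minimal record above can be sketched as a small data structure. The class and field names here are my own illustration; they do not correspond to real MQC field names:

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    """A deliberately minimal test idea record: a name, a rich
    description, and the single step MQC requires to record a status."""
    name: str
    description: str  # models, heuristics, problem-solving notes
    step: str = "Explore as per the description and record Pass/Fail"

# Hypothetical example idea, following the numbering scheme above.
idea = TestIdea(
    name="01.01.01 - Test Idea 01",
    description="Tour the login screen; note error-handling gaps "
                "and any heuristics applied.",
)
print(idea.name, "->", idea.step)
```

The point is that the description carries the thinking, while the single step exists only to satisfy the tool's run/record mechanics.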

Since we use a different system for capturing defects (don't ask!) I also added a folder to each project called 99 - Defects, so that I could track any defects that needed testing.

The next step was to define a structure for the test lab (this is where test runs are recorded). I implemented the following structure:

Project Name -->
Project Release Version X.Y -->
01.01 - Mission Statement 01 -->
01.01.01 - Test Idea 01
01.01.02 - Test Idea 02
01.02 - Mission Statement 02 -->
01.02.01 - Test Idea 01
01.02.02 - Test Idea 02

It is recommended that the X.Y numbers in the Project Release Version name are written as zero-padded multi-digit integers, to ease sorting by name. The structure itself was basically copied over from the test plan section.
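A quick demonstration of why the zero padding matters: plain string sorting puts "1.10" before "1.2", whereas padded names sort in the intended order. The release names below are invented examples:

```python
# Unpadded version numbers sort lexicographically, not numerically.
unpadded = ["Release 1.2", "Release 1.10", "Release 1.3"]
padded   = ["Release 01.02", "Release 01.10", "Release 01.03"]

print(sorted(unpadded))  # "Release 1.10" lands before "Release 1.2"
print(sorted(padded))    # 01.02, 01.03, 01.10 - the intended order
```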

For exploratory testing I suggested that, as a minimum, the following columns are included when recording the execution of a test idea:
  • Plan.Test Name
  • Result
  • Defect (For recording CQ defects raised within that test script)
  • Priority (How important is this test idea? What is the risk to the project of not running it?)
  • Status
  • Execution Date
Once this had been set up it was easy to run a session based upon a mission statement. Each mission statement had multiple test ideas. I found this very useful since it was very quick to create test focus areas based upon test charter names and mission statements, and these could then very simply be turned into session sheets within the MQC test lab.
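The grouping of test ideas into per-mission session sheets can be sketched as follows. The flat (id, name) rows stand in for a test lab export; this is an illustration of the grouping idea, not a real MQC API call:

```python
from collections import defaultdict

# Hypothetical rows exported from the test plan: (idea_id, idea_name).
rows = [
    ("01.01.01", "Test Idea 01"),
    ("01.01.02", "Test Idea 02"),
    ("01.02.01", "Test Idea 01"),
]

# Group ideas by mission statement: "01.01.01" belongs to mission "01.01".
sessions = defaultdict(list)
for idea_id, name in rows:
    mission = idea_id.rsplit(".", 1)[0]
    sessions[mission].append((idea_id, name))

for mission, ideas in sessions.items():
    print(f"Session sheet for mission {mission}: {len(ideas)} test idea(s)")
```

Because the mission statement is encoded in the identifier, a session sheet is just a filter on the id prefix.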

One of the key elements of session based testing is to capture all the evidence of the exploratory testing session. I implemented the following to capture details of what went on in the testing session. Each test idea was run from within MQC and I recorded whether that test idea passed or failed. (I am aware this can be very subjective and depends on context; however, to ease the transition to ET it is necessary to have some familiar ways of recording progress.) I ensured that all session notes, log captures, screen prints, videos etc. were captured by attaching them to the test idea.

THIS was very IMPORTANT, since if anyone needs to follow your test idea in the future they now have a record of what you executed and HOW. There is an issue with bias here: people carrying out testing afterwards could just follow your notes and repeat what you did, which is not really exploratory testing, but that can be mitigated by mentoring.

You now have a tool in which you can capture what you have done during your exploratory testing sessions.

There are a few issues I find with MQC, and I am sure people out there in the testing community may have the answers. I want to use MQC to record the time spent on each session (as short, medium, long). I also wanted to capture how much of that time, as a percentage, was spent on:
  • Test execution
  • Bug reporting and investigation
  • Test environment set up
  • Test data setup
This would help in telling the story of what is stopping the testers actually testing. I am sure there is a way to do this in MQC and I just need to do some more investigation. I hope readers of this find it useful; I know it has helped me to persuade management to take exploratory testing seriously.
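As a rough illustration of the bookkeeping this would involve, here is a sketch that computes the percentage split from minutes logged per activity. The minute figures and the short/medium/long bands are my own invented examples, not an SBTM standard or an MQC feature:

```python
# Minutes logged against each activity in one session (example data).
minutes = {
    "Test execution": 55,
    "Bug reporting and investigation": 20,
    "Test environment set up": 10,
    "Test data setup": 5,
}

total = sum(minutes.values())
percentages = {activity: round(100 * t / total)
               for activity, t in minutes.items()}

def session_length(total_minutes):
    """Band a session as short/medium/long (illustrative thresholds)."""
    if total_minutes <= 45:
        return "short"
    if total_minutes <= 75:
        return "medium"
    return "long"

print(session_length(total), percentages)
```

Even a crude split like this makes it visible when, say, environment setup is eating a third of every session.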

To finish: this is working for me. It is not perfect, and I am investigating other ways/tools that can make this more efficient. I am looking at using a Java application to create the session sheets and report back via the MQC API directly, but that is in the future. I am also investigating ways to customize MQC so that I can have the columns I wish to have. I will let you know if that works.


  1. Thanks for sharing this wonderful experience report John. When I was going through your post I had the same question in mind – how are you keeping track of time spent on each mission? Since it is vital in SBTM.

    I have used Quality Centre only as a bug database and so have limited knowledge of the tool :(. I might not be able to answer your questions. But I have a few more questions for you.

    How were the session notes captured? What did you use to capture the session notes? I have previously used the Session Tester tool, which was very helpful.

    Also, please keep sharing your experiences in using Quality Centre for SBTM.

    Sharath B

  2. Thank you for the comments Sharath

    With regards to your question about how to capture session notes: I tried using the Session Tester tool but found it a little too restrictive, and it hindered the testing effort.

    I use a variety of methods, from paper note pads which are then scanned, to a standard text editor. If your system is Windows-based, I have found that remote desktoping into a machine that can be used to gather evidence (never use the same machine that you are testing on) works well. I then use something like PowerPoint to save captured images and errors.

    The only problem with this is that it is a little unstructured but with proper mentoring and a good debrief this can easily be rectified.

  3. If your system is Windows-based, I have found that remote desktoping into a machine that can be used to gather evidence (never use the same machine that you are testing on) works well.

    John, are there any specific reasons why you prefer another PC to document on, and not the PC running the software under test?

    Sharath B

  4. Hi Sharath

    There are a couple of important reasons not to use the machine that you are actually testing the software on.

    Software stability and experience: if the software you are testing has any problems which can cause the machine to lock up or crash, then there is a risk that you could lose all the evidence you have been gathering for your session. I have experienced this on several occasions, so have now learnt my lesson.

  5. Good points John. Will keep this in mind.

    We used Session Tester to record our notes when we were testing multimedia applications on a touchscreen board, so we never had such issues.

    I guess iPads and digital scribble pads could use this as a good marketing strategy :)

    Sharath B