I thought I would write about my experiences of using Mercury Quality Center (MQC) to help manage my exploratory testing sessions.
When carrying out exploratory testing I use James Bach and Jon Bach's session-based test management approach (http://www.satisfice.com/sbtm/). What I found is that the tool provided there did not match the needs of the company and was hard to sell to management, since we already had a commercial tool for capturing testing effort (MQC). I had to re-think how I could get buy-in from management on using the exploratory testing approach whilst making use of the tools we already had.
One of the first things I did was implement a structure within the test plan section of MQC. I defined the following folder structure for each project:
Project Name -->
  Test Charter -->
    Mission Statement
So under the planning section, testers can create a folder for the test charter they are working on, then add a folder for each mission statement, and then add their test ideas.
The thinking behind this was that, at a glance, anyone can see what has been covered under each test charter and whether there are any gaps. Reports can be pulled off and used during debrief sessions to act as focus points when discussing the testing that has been done.
I created a Test Plan Hierarchy using a standard numbering scheme for the folder and test idea names. This helped with traceability and navigation around the test plan.
Project Name -->
  01 – Test Charter 01 -->
    01.01 – Mission Statement 01
      01.01.01 – Test Idea 01
      01.01.02 – Test Idea 02
    01.02 – Mission Statement 02
      01.02.01 – Test Idea 01
      01.02.02 – Test Idea 02
  02 – Test Charter 02 -->
    02.01 – Mission Statement 01
      02.01.01 – Test Idea 01
      02.01.02 – Test Idea 02
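A numbering scheme like this is easy to get wrong when typing folder names by hand, so it can help to generate the names. A minimal sketch in Python – the function names and label wording are my own, purely illustrative, and nothing here is part of MQC itself:

```python
# Hypothetical helpers for generating the zero-padded names used in the
# test plan hierarchy. Label wording is illustrative only.

def charter_name(c):
    return f"{c:02d} - Test Charter {c:02d}"

def mission_name(c, m):
    return f"{c:02d}.{m:02d} - Mission Statement {m:02d}"

def idea_name(c, m, i):
    return f"{c:02d}.{m:02d}.{i:02d} - Test Idea {i:02d}"

print(idea_name(2, 1, 1))  # 02.01.01 - Test Idea 01
```

Generating the names also guarantees the parent numbers in each test idea always match its mission statement and charter folders.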
MQC is set up for a formal, scripted style of testing built around test cases and test steps, and I have not found a way around this. Instead of test cases I use test ideas, and I needed a quick way to create new test ideas without getting bogged down writing lots of detailed steps. So I suggested that each test idea has ONLY the following information:
- Test Idea Name
- Test Idea description (this should be as descriptive as possible – include any models, heuristics, or problem-solving ideas)
- A single test step – this is required by MQC so that the user can run the test and record its status (Pass/Fail etc.)
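To show just how lightweight such a record is, the three fields above can be modelled in a few lines. This is only a sketch of the idea – the field names and default step wording are mine, not MQC's:

```python
from dataclasses import dataclass

# Hypothetical model of the minimal test idea record described above.
# MQC stores these as test cases; the field names here are my own.
@dataclass
class TestIdea:
    name: str
    description: str  # models, heuristics, problem-solving notes
    # The single step MQC requires so the test can be run and marked Pass/Fail.
    step: str = "Execute the test idea and record the result"

idea = TestIdea(
    name="01.01.01 - Test Idea 01",
    description="Explore error handling on the login form using boundary values",
)
```

The point is that creating a new test idea costs a name and a description, nothing more – the single step is boilerplate.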
Since we use a different system for capturing defects (don't ask!) I also added a folder to each project called 99 – Defects, so that I could track any defects that needed testing.
The next step was to have a structure for the test lab (this is where test runs are recorded).
I implemented the following structure:
Project Name -->
  Project Release Version X.Y -->
    01.01 – Mission Statement 01 -->
      01.01.01 – Test Idea 01
      01.01.02 – Test Idea 02
    01.02 – Mission Statement 02 -->
      01.02.01 – Test Idea 01
      01.02.02 – Test Idea 02
It is recommended that the X.Y numbers in the Project Release Version name are written as multi-digit, left-zero-padded integers; this makes sorting by name work correctly. The structure itself was basically copied over from the test plan section.
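To illustrate why the zero padding matters: a plain string sort of unpadded release numbers puts 10 before 2, whereas padded names sort in true release order. A quick sketch (the release names are made up for the example):

```python
# Unpadded release names sort incorrectly as strings: "10" sorts before "2".
unpadded = ["Release 2.0", "Release 10.0", "Release 1.2"]
print(sorted(unpadded))  # ['Release 1.2', 'Release 10.0', 'Release 2.0']

# Left-zero-padded names sort in true release order.
padded = ["Release 02.00", "Release 10.00", "Release 01.02"]
print(sorted(padded))    # ['Release 01.02', 'Release 02.00', 'Release 10.00']
```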
For exploratory testing I suggested that, as a minimum, the following columns are included when recording the execution of a test idea:
- Plan.Test Name
- Defect (for recording CQ defects raised within that test idea)
- Priority (how important is this test idea, and what is the risk to the project of not running it)
- Execution Date
Once this had been set up, it was easy to run a session based on the mission statement for that session. Each mission statement had multiple test ideas. I found this very useful, since it was quick to create test focus areas based on test charter names and mission statements; these could then very simply be turned into session sheets within the MQC test lab.
One of the key elements of session-based testing is capturing all the evidence from the exploratory testing session. I implemented the following to capture the details of what went on in each session. Each test idea was run from within MQC and recorded as passed or failed. (I am aware this can be very subjective and depends on context; however, to ease the transition to exploratory testing it is necessary to have some familiar ways of recording progress.) I ensured that all session notes, log captures, screen prints, videos etc. were captured by attaching them to the test idea.
THIS was very IMPORTANT – if anyone needs to follow your test idea in the future, they now have a record of what you executed and HOW you executed it. There is an issue with bias here: people carrying out testing afterwards could just follow your notes and repeat what you did, which is not really exploratory testing, but that can be mitigated by mentoring.
You now have a tool in which you can capture what you have done during your exploratory testing sessions.
There are a few issues I find with MQC, and I am sure people out there in the testing community may have the answers. I want to use MQC to record the time spent on each session (as short, medium, or long). I also wanted to capture how much of that time, as a percentage, was spent on:
- Test execution
- Bug reporting and investigation
- Test environment set up
- Test data setup
This would help in telling the story of what is stopping the testers from actually testing. I am sure there is a way to do this in MQC and I just need to do some more investigation. I hope readers find this useful; I know it has helped me persuade management to take exploratory testing seriously.
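Even if MQC only ends up holding the raw minutes per activity, the percentage split itself is a simple calculation. A minimal sketch of what I have in mind – the activity names come from the list above, but the function and field layout are my own, not an MQC feature:

```python
# Hypothetical breakdown of one session's time into the four categories
# above, expressed as percentages of the total session time.
def session_breakdown(minutes):
    total = sum(minutes.values())
    return {activity: round(100 * spent / total)
            for activity, spent in minutes.items()}

print(session_breakdown({
    "test execution": 60,
    "bug reporting and investigation": 20,
    "test environment set up": 5,
    "test data setup": 15,
}))
```

A report like this per session is exactly the evidence needed when telling management how much session time is lost to setup rather than testing.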
To finish: this is working for me. It is not perfect, and I am investigating other ways and tools that can make it more efficient – for example, using a Java application to create the session sheets and report back via the MQC API directly, but that is in the future. I am also investigating ways to customize MQC so that I can have the columns I wish to have. I will let you know if that works.