Wednesday, 29 January 2014

Using games to aid tester creativity

Recently Claire Moss blogged about potty training and how it came about from a card game called Disruptus, which I introduced to the Atlanta Testing meetup while I was in the USA. This reminded me that I had been meaning to blog about how I use this tool in a workshop and in my day-to-day testing to improve my own and my team's testing ideas. The workshop is a creative and critical thinking and testing workshop which I intend to deliver at the London Tester Gathering in Oct 2014 – early bird tickets are available.

The workshop is based upon a series of articles that I have written on creative and critical thinking (part 1 here). As part of the workshop I talk about using tactile tools to aid your creative thoughts; having objects you can hold and manipulate has been shown to improve creativity (kinesthetic learning). One part of the workshop introduces the game of Disruptus, which has very simple rules. You have about 100 flash cards which have drawings or photographs on them, and you choose a card at random. The game even includes some spare blank cards for you to create your own flash cards. An example of some of the cards can be seen below:



You then have a selection of action cards which have the following on them:
  •  IMPROVE
    • Make it better: Add or change 1 or more elements depicted on the card to improve the object or idea
    • EXAMPLE From 1 card depicting a paperclip: Make it out of a material that has memory so the paperclip doesn’t distort from use.
  • TRANSFORM
    • Use the object or idea on the card for a different purpose.
    •  EXAMPLE From 1 card depicting a high heel shoe: Hammer the toe of the shoe to a door at eye level and use the heel as the knocker.
  • DISRUPT
    • Look at the picture, grasp what the purpose is, and come up with a completely different way to achieve the same purpose.
    •  EXAMPLE From 1 card depicting a camera: Wear special contact lenses that photograph images with a wink of the eye.
  • CREATE 2
    •  Using 2 cards, take any number of elements from each card and use these to create a new object or idea.
  •  JUDGES CHOICE
  •  PLAYERS CHOICE
For the purpose of this article I will only be looking at the first three. You can either choose which action card you wish to use or roll the dice that is provided with the game. The rules are simple: you talk about how you have changed the original image(s) in accordance with the action card, and a judge decides which idea is best to determine the winner. When I run this we do not have winners; we just discuss the great ideas that people come up with. To encourage creativity, there are no bad ideas.

The next step in the workshop is applying this to testing. Within testing there are still a great many people producing and writing test cases which are essentially checks. I am not going to enter into the checking vs testing debate here; however, this game can be used if you are struggling to move beyond your ‘checks’ and are repeating the same thing each time you run your regression suite. It can be used to provide ideas to extend your ‘checks’ into exploratory tests.

Let us take a standard test case:
Test Case: Log in to the application using a valid username/password
Expected result: Login successful, application screen is shown.
Now let us go through each of the action cards and see what ideas we can come up with to extend this into an exploratory testing session.

  •  IMPROVE - Make it better: (Add or change 1 or more elements depicted on the card to improve the object or idea.)

Using the action described above, can you think of new ways to test by taking one element from the test case?

Thinking quickly for 1 minute I came up with the following:
    • How do we start the application? Are there many ways? URL? Different browsers? Different OS?
    • Is the login screen good enough or can it be improved (accessibility/disability issues)?
    • What are valid username characters?
    • What are valid password characters?
    • Is there a help option to know what valid username/passwords are?
    • Are there security issues when entering username/password?
Can you think of more? This is just from stepping back for a minute and allowing creative thoughts to appear. (Remember, there are no bad ideas.)

Let us now look at another of the action cards.
  • TRANSFORM - Use the object or idea on the card for a different purpose.
What ways can you think of from the example test case above to transform the test case into an exploratory testing session?

Again we could look at investigating:
    • What alternatives are there to logging in to the application? Fingerprint, secure token, encrypted key?
    • Can we improve the security of the login code?
    • What security issues can you see with the login, and how can you offer improvements to prevent these issues?
It takes very little time to come up with many more ways in which you can transform the test case into something more than a ‘check’.

Now for the next (and final for the purpose of this article):
  • DISRUPT - Look at the picture, grasp what the purpose is, and come up with a completely different way to achieve the same purpose.
I may have already touched upon some of the ideas on how to disrupt in the previous two examples. That is not a bad thing: if an idea appears in more than one area, it could be an indication that the idea is well worth pursuing.

Some ideas on disrupting could be:
    • Do we need a login for this? 
    • Is it being audited?
    • Is it an internal application with no access to the public?
I hope from this article you can see how such a simple game can help to improve your mental ability and testing skills, as Claire mentioned in her article.
Since software testing is a complex mental activity, exercising our minds is an important part of improving our work.
This is just a small part of the workshop. I hope you have enjoyed the article and, if so, I hope to see some of you soon when I run the full workshop.

PS – I intend to run a cut-down version of the workshop at the next Atlanta Testing Meet Up whilst I am here in the USA. Keep an eye here for announcements in the near future.




Monday, 27 January 2014

Measuring Exploratory Testing

A quick post on a concept we are working on within our company.

One of the difficulties I have found with implementing exploratory testing is finding a way to measure, at a high level (for stakeholders), how much testing you have done. This article looks at the problem and tries to provide a solution. It should be noted that there are already good ways of reporting automation (checking), so for this article that is out of scope.

The current way we manage exploratory testing is by using time-boxed sessions (session-based test management), and for reporting at a project level we can (and do) use dashboards. This leaves open the question of how much exploratory testing has been done against the possible amount of testing time available.

After having discussions with some work colleagues, we came up with the following concept (this was a great joint collaboration effort; I cannot claim the ideas as just mine). The basic concept of session-based test management is that you time-box your exploration (charters) into sessions, where one session equates to one charter (if you have not come across the terminology of charters then refer to the session-based test management link). To simplify, we use an estimate that one session is half a day (sometimes you do more, sometimes less); we therefore have a crude way to estimate the possible number of charters you could run in a period of time.

For example, if you have a sprint/iteration of two weeks, each person could run roughly 20 sessions; with 5 testers you could have a total of 5 * 20 = 100 possible sessions within your sprint. No one on a project would be utilized like this 100% of the time, so the concept we came up with is that for your project you set a target for how much of your team's time should be spent doing exploratory testing. The suggestion is to begin by setting this to a value such as 25%, with the aim of increasing it as your team moves more and more towards automation for the checking and exploration for the testing, the goal being a 50/50 split between checking and testing.

Using the example above we can now define a rough metric to see if we are meeting our target (limited by time).

If we have 2 weeks, 5 testers, and a target of 25% exploratory, then by the end of the two weeks we would expect to have run 25 exploratory sessions if we are meeting our target.

We can use this to report at a high level whether we are meeting our exploratory targets, within a dashboard as shown below:

  • Possible sessions: 100
  • % Target Sessions: 25%
  • Number of actual sessions: 25
  • % Actual Target: 25%
Following this format we can then use colours (red/green) to indicate whether we are above or below our target:

  • Possible sessions: 100
  • % Target Sessions: 25%
  • Number of actual sessions: 15
  • % Actual Target: 15%
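For readers who like to see the arithmetic spelled out, here is a minimal sketch in Python of the capacity calculation and the red/green indicator described above. The function names and layout are my own illustration, not part of any tool we actually use; the only assumption carried over from the article is that one session is roughly half a day.

def possible_sessions(working_days, testers, sessions_per_day=2):
    # Crude capacity estimate: one session is roughly half a day,
    # so each tester can run about two sessions per working day.
    return working_days * testers * sessions_per_day

def session_report(working_days, testers, target_pct, actual_sessions):
    possible = possible_sessions(working_days, testers)
    actual_pct = 100 * actual_sessions / possible
    status = "green" if actual_pct >= target_pct else "red"  # above or below target
    return possible, target_pct, actual_sessions, actual_pct, status

# Two-week sprint (10 working days), 5 testers, 25% exploratory target
print(session_report(10, 5, 25, 25))  # (100, 25, 25, 25.0, 'green') - meeting the target
print(session_report(10, 5, 25, 15))  # (100, 25, 15, 15.0, 'red')   - below the target

The numbers deliberately match the two dashboards above; the point is only that the calculation is trivial to automate once you have agreed the session length and the target percentage.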
We feel this would be a useful indication of the amount of time available and the amount of time actually spent doing exploratory testing rather than checking (manual or automated).

There are some caveats that go with using this type of measurement.

Within session based test management the tester reports roughly the amount of time they spend:
  • Testing
  • Reporting
  • Environment set-up
  • Data set-up
This is reported as a percentage of the total time in a session, so more detailed reporting can be done within a session, but we feel this information would be of use at a project level rather than at a stakeholder level. If it turns out to be of use to stakeholders, this is something we could revisit and come back to.

Your thoughts on this concept would be most welcome. We see this as a starting point for a discussion that will hopefully provide a useful way to report, at a high level, how much time we spend testing compared to checking.

We are not saying this will work for everyone, but for us it is an ideal way of saying to stakeholders: of all the possible time we could have spent on exploratory testing, this is the amount of time we did spend, and these are the risks associated with that.

Sunday, 29 December 2013

Is Defect Removal Efficiency a Fallacy?

I noticed a tweet by Lisa Crispin in which she said she had commented on this article: http://blog.btdconf.com/?p=43. I may not agree with everything in the article, but it makes some interesting points that I may come back to at a later date. I have still not seen that comment from Lisa yet...

However I did notice a comment by Capers Jones which I have reproduced below:

Capers Jones
December 29, 2013 at 10:01 am
This is a good general article but a few other points are significant too:

There are several metrics that violate standard economics and distort reality so much I regard them as professional malpractice:
1. cost per defect penalizes quality and the whole urban legend about costs going up more than 100 fold is false.
2. lines of code penalize high level languages and make non coding work invisible.

The most useful metrics for actually showing software economic results are:
1. function points for normalization
2. activity-based costs using at least 10 activities such as requirements, design, coding, inspections, testing, documentation, quality assurance, change control, deployment, and management.

The most useful quality metric is defect removal efficiency (DRE) or the percentage of bugs found before release, measured against user-reported bugs after 90 days. If a development team finds 90 bugs and users report 10 bugs, DRE is 90%. The average is just over 85% but top projects using inspections, static analysis, and formal testing can hit 99%. Agile projects average about 92% DRE. Most forms of testing such as unit test and function test are only about 35% efficient and fine one bug out of three. This is why testing alone is not sufficient to achieve high quality.

Some metrics have ISO standards such as function points and are pretty much the same, although there are way too many variants. Other metrics such as story points are not standardized and vary by over 400%.

A deeper problem than metrics is the fact that historical data based on design, code, and unit test (DCUT) is only 37% complete. Quality data that only starts with testing is less than 50% complete. Technical debt is only about 17% complete since it leaves out cancelled projects and also the costs of litigation for poor quality.

My problem is the use of this metric, which I feel is useless, to measure the quality of the software; it seems too easy to game. Once people (or companies) are aware that they are being measured, their behaviour adjusts on a psychological level (intended or not) so that they look good against whatever they are being measured on.

So, using the example given by Capers:

Say a company wants their DRE to look good, so they reward their testing teams for finding defects, no matter how trivial, and they end up finding 1000; the customers still only find 10.

Using the above example, that means this company can record a DRE of 99.0099%. WOW – that looks good for the company.

Now let us say they really, really want to game the system and they report 1,000,000 defects against the customers' 10 – this now starts to become a worthless way to measure the quality of the software.
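To make the arithmetic explicit, here is a small illustrative calculation. This is my own sketch of the DRE formula as Capers describes it (bugs found before release measured against user-reported bugs), not code from any real measurement tool.

def dre(internal_bugs, user_reported_bugs):
    # Percentage of bugs found before release versus those reported by users afterwards
    return 100 * internal_bugs / (internal_bugs + user_reported_bugs)

print(dre(90, 10))         # 90.0   - the example Capers gives
print(dre(1000, 10))       # ~99.01 - reward testers for logging trivial defects
print(dre(1_000_000, 10))  # ~99.999 - game the system outright

In all three cases the customer experience is identical – ten defects reached them – yet the metric climbs ever closer to 100%.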

It does not take into account the defects that still exist and are never found. How long do you wait before you can say your DRE is accurate? What if the client finds another 100 six months later, or a year later? What if the company's testers find another 100 after the release to the client – how is this included in the DRE %?

As any tester would ask:
IS THERE A PROBLEM HERE?
This form of measurement does not take into account the technical debt of dealing with this ever-growing list of defects. Measuring using such a metric is flawed by design, never mind the other activities mentioned by Capers which also have hidden costs that will quickly spiral. By the time you have adhered to all of this, given the current marketplace of rapid delivery (DevOps), your competitors have prototyped, shown it to the client, adapted to meet what the customer wants, and released – implementing changes to the product as the customer desires rather than focusing on numbers that quickly become irrelevant.

At the same time I question numbers quoted by Capers such as:
  • Most forms of testing such as unit test and function test are only about 35% efficient and fine one bug out of three. 
  • Other metrics such as story points are not standardized and vary by over 400%
  • Quality data that only starts with testing is less than 50% 
  • Plus others
Where are the sources for all these metrics?  Are they independently verified?

Maybe I am a little more cynical about numbers being quoted, especially after reading "The Leprechauns of Software Engineering" by Laurent Bossavit. Without any attribution for the numbers, I do question their value.

To finish this little rant I would like to use a quote often attributed to Albert Einstein:
"Not everything that can be counted counts, and not everything that counts can be counted."

Thursday, 26 December 2013

Writing A book

I have decided that in the coming year I will be writing a book about software testing and psychology. This has been on my list of things to do for a while, and I am now putting aside time to actually get around to doing it. As such I may not be updating my blog as much as I normally would over the coming year, at least until the book has been completed.

If you are interested in having a look at a sample from the book, please visit the following page: https://leanpub.com/thepsychologyofsoftwaretesting

You can download the first chapter as a sample for FREE.

Thursday, 21 November 2013

Mapping Enough to Test

I have seen on far too many occasions, whilst working in testing, people spending months gathering together information and creating test plans to cover every single requirement, edge case, corner case.  

Some people see this as productive and important work; in the dim and distant past I did too. I have learnt a lot since then, and now I personally do not think it is as important as people make it out to be. I see it as a waste of effort and time – time which would be better spent actually testing the product and finding out what it is doing. This is not to say that ‘no’ planning is the right approach to take, rather that the test planning phase may be better suited to defining what you need to do to start doing some testing. It is more important to discover things that could block you or, even worse, prevent you from testing at all. This article looks at the planning phase of testing from my own personal experience and viewpoint.

The starting point for this article was re-reading an article that Michael Bolton wrote for a previous edition of Sticky Minds magazine called ‘Testing Without a Map’. In it Michael talked about using heuristics to help guide your testing effort; at the time he suggested using HICCUPPS as a guide to your testing and a way to focus on inconsistencies. That article was about useful approaches when actually testing the product rather than about the planning phase. This article focuses on the time before you actually test.

The only way to know something is to experience it, and by experiencing what the software is doing you are testing it. My own experience is that normally there is a delay between what is being developed and having something testers can test (yes, even in the world of Agile); this is the ideal time in which we can and should do some test planning. But what do we include in our plan? If we follow the IEEE standard for test planning, we get the following areas recommended for inclusion in the test plan.

1.      Test Plan identifier – unique number, version, identification when update is needed (for example at x % requirements slip), change history
2.      Introduction (= management summary) – in short what will and will not be tested, references.
3.      Test items (derived from risk analysis and test strategy), including:
a.      Version
b.       Risk level
c.      References to the documentation
d.      Reference incidents reports
e.       Items excluded from testing
4.      Features to be tested (derived from risk analysis and test strategy), including:
a.      Detail the test items
b.      All (combinations of) software features to be tested or not (with reason)
c.      References to design documentation
d.      Non-functional attributes to be tested
5.      Approach (derived from risk analysis and test strategy), including:
a.      Major activities, techniques and tools used (add here a number of paragraphs for items of each risk level)
b.      Level of independence
c.      Metrics to evaluate coverage and progression
d.      Different approach for each risk level
e.      Significant constraints regarding the approach
6.      Item pass/fail criteria (or: Completion criteria), including:
a.      Specify criteria to be used
b.      Example: outstanding defects per priority
c.      Based on standards (ISO9126 part 2 & 3)
d.      Provides unambiguous definition of the expectations
e.      Do not count failures only, keep the relation with the risks
7.      Suspension and resumption (for avoiding wastage), including:
a.      Specify criteria to suspend all or portion of tests (at intake and during testing)
b.      Tasks to be repeated when resuming test
8.      Deliverables (detailed list), including:
a.      Identify all documents -> to be used for schedule
b.      Identify milestones and standards to be used
9.      Testing tasks for preparing the resource requirements and verifying if all deliverables can be produced.
10.   The list of tasks is derived from Approach and Deliverables and includes:
a.      Tasks to prepare and perform tests
b.      Task dependencies
c.      Special skills required
d.      Tasks are grouped by test roles and functions
e.      Test management
f.       Reviews
g.      Test environment control
11.   Environmental needs (derived from points 9 and 10) for specifying the necessary and desired properties of the test environments, including:
a.      Hardware, communication, system software
b.      Level of security for the test facilities
c.      Tools
d.      Any other like office requirements

WOW – if we did all of this, when would we ever get time to test? The problem is that in the past I have been guilty of blindly following this, using test plan templates and lots of cut and paste from other test plans.

Why?  

This was how we had always done planning, and I did not question whether it was right or wrong, or even useful. Mind you, in the back of my mind I would wonder why we were doing this, since nobody ever read it or updated it as things changed. Hindsight is a wonderful thing!

My thinking about what we really need to do when planning has changed drastically, and now I like to do just enough planning to enable a ‘thinking’ tester to do some testing of the product. The problem we face in our craft is that we make excuses not to do what we should be doing, which, by the way, is actual testing. We try to plan in far too much detail and map out all possible scenarios and use cases rather than focus on what the software is doing. Continuing the theme of ‘The Map’ from the article by Michael Bolton, Alfred Korzybski once stated that

“The map is not the territory”

As a reader of this article what does that imply to you?

To me it was an epiphany moment: it was when I realised that we cannot, and should not, plan what we intend to test in too much detail. What Korzybski was saying is that no matter how much you plan, and however detailed your plan is, it will never match reality. In some ways it is like designing a map at a 1:1 scale. How useful would you find this kind of map for getting around? Would it be of any use? Would it actually map the reality of the world you can see and observe? It would not be dynamic, so anything that has changed or moved would not be shown. What about interactive objects within the map? They are constantly changing and moving, and as such, by the time you get hold of the map it is normally out of date. Can you see how that relates to test plans?

What this means in the reality of software testing is that we can plan and plan and plan, but that gives no indication of the testing that we will actually do. After a discussion with Michael Bolton on Skype, he came up with a great concept and said we need to split planning time into test preparation and actual planning.

You need to spend some time getting ready to test: getting your environments, equipment, and automation in place. Without this you could be blocked from actually starting to do some testing. This is vital work and far more important than writing down step-by-step test scripts.

The purpose of testing is to find out information, and the only way to do this is to interact with the application. It is said that more things are discovered by exploration and accident than by planning; planning for something and that something then happening is more than likely a coincidence. The problem with doing too much planning is that it becomes out of date by the time you get to the end of your testing. It is much better to have a dynamic, adaptive test plan that changes as you uncover and find more to test. One of the ways I have adopted this is through the use of mind maps; there have been many articles in the testing community about this subject, and if you want to know more I would suggest you go and Google ‘mind maps and software testing’.

The problem we have is that people are stuck in a mentality that test cases are the most important thing to produce when we start test planning. There is a need to move away from test cases towards missions (goals): something that you could do and achieve in a period of time, something that, more importantly, is reusable, and whose use will depend on the context and the person carrying out the mission. When planning you only need to plan enough to start testing (as long as your test prep has been done); then, when you test, you will uncover interesting information and start to map out what you actually see rather than what you thought you might see. Your test plan will grow and expand as you become rich in information and knowledge about what you find and uncover.

Jutta Eckstein, in her article on planning and controlling complex projects, makes the following statement:
Accurate forecasts aren't possible because the world is not predictable

So it is wise not to plan too far ahead: plan only enough to do some testing, find out what the system is doing, and adjust your planning based upon the hard information you uncover. Report this to those who matter. The information you find could be what is valuable to the business. Then look for more to test; you should always have a backlog, and the backlog should never be empty. The way in which I do this is to report regularly what we have found and what new missions we have generated based upon the interesting things we came across. I then re-factor my missions based upon:
  • The customer priority – how important is it that we do this mission to the customer
AND
  • The risk to the project – if we did this mission instead of the one we had planned to do next from the backlog, what risk is this to the project?

Paul Holland discusses this approach in more detail in an article Michael Bolton wrote.

To summarise, we need to think more about how much planning we do and think critically about whether producing endless pages of test cases during test planning is the best use of our resources. We need to plan enough to do some testing and adapt our test plan based upon the information we uncover. There is a need to re-evaluate often what you intend to do and to adapt the plan as your knowledge of the system increases. It comes down to this: stop trying to map everything, and map just enough to give you a starting point for discovery and exploration.

*Many thanks to Michael Bolton for being a sounding board and providing some useful insights for this article.

Wednesday, 13 November 2013

Book Review - Explore it! by Elisabeth Hendrickson

The following is a review of the book "Explore It!" by Elisabeth Hendrickson.
Elisabeth Hendrickson website
Having followed Elisabeth on Twitter (@testobsessed) and used her test heuristics cheat sheet extensively, I was very excited when I found out that she was releasing a book about exploratory testing, and I was fortunate to receive an early ebook version. The following is my review of the book and of the things I found interesting and that I hope others may find interesting too.
The book begins with an explanation of testing and exploration in which she mentions the debate on testing and checking, and to me this gives a good grounding in where Elisabeth sets the context for what follows in the book. I especially like the point she makes regarding the need to interact with the system:
Until you test—interact with the software or system, observe its actual behaviour, and compare that to our expectations—everything you think you know about it is mere speculation.
Elisabeth brings up the point of how difficult it is to plan for everything and suggests we plan just enough. The rest of the first chapter goes into more detail about the essentials of exploratory testing and making use of session-based test management.
One part of the book I found useful was the practice sessions at the end of each chapter, which help you recap what was explained within the chapter. If you are the type to normally skip this kind of thing (like myself), on this occasion I would recommend that you give them a go; they really do help you understand what has been written in the chapter.
The next chapter introduces charters to the reader, and for me this is the most useful and important chapter of the book. It helped me to clarify some parts of the exploratory testing approach that I was struggling with and simplified my thoughts. Elisabeth explains a rather simple template for creating your own charters (I have added an illustrative example of my own after the template below).

Explore (target)
With (resources)
To discover (information)

Where:
  • Target: Where are you exploring?
  • Resources: What resources will you bring with you?
  • Information: What kind of information are you hoping to find?
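To make the template concrete, here is a filled-in charter of my own. It is purely illustrative and is not one of the examples from the book:

Explore the login screen
With a collection of malformed usernames and passwords
To discover how the application handles and reports invalid credentials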
The rest of the chapter takes this template and, using examples, provides the reader with a way to create charters simply and, in some cases, quickly. Along the way she introduces rules that one may wish to follow to avoid turning charters into bad charters. She also offers advice on how to get information for new charters (joining requirement/design meetings, playing the headline game).
What, you do not know what the headline game is?  Well you need to buy the book to find out.
I have started to use this template to create charters for my own testing, going so far as to add it to my mind map test plans. To me, this very useful and simple approach to chartering exploratory testing was alone worth the price of the book.
The following chapter takes you on a journey through the importance of being able to observe and notice things. This is a key element of exploratory testing, and looking for more things to test is a part of it. Elisabeth talks about our biases and how easy it is for us to miss things, and provides examples of how we may try to avoid some of them. She talks about the need for testers to question and question again, to be able to dig deep and uncover information that could be useful. This chapter is useful for uncovering hidden information, and it suggests ways in which you can get more information about what you want to explore without the need for requirement documents. This is important, since it is better to have the skills that allow you to ask questions.
The next few chapters of the book look at ways in which you change or alter the system to uncover more information by means of exploration. These chapters take the cheat sheet Elisabeth and others produced and add a lot more detail and practical ways to look at the system from a different perspective. They include titles such as:
  • Find Interesting Variations
  • Vary Sequence and Interactions
  • Explore Entities and their relationships
  • Discover states and transitions
A great deal of this is found in part two of the book and this section is something I repeatedly return to for quick inspiration of what I can do to explore the system more.  It gives some great techniques on how to find variants in your system and how to model the way the system is working.  It provides useful ways to help you find the gaps in your system or even in your knowledge.
In the middle of the book there is a chapter called ‘Evaluate Results’, in which Elisabeth asks whether you know the rules of your system. If you do not, then it would be useful to explore and find them. She explains the meaning of rules using ‘never’ and ‘always’. If you have a rule saying the system should always do this, then explore it. The same goes for ‘never’: you can explore and uncover where these rules are broken. This chapter also looks at outside factors such as standards and external and internal consistency. All of these are important when exploring the system, and Elisabeth reminds us in this chapter to be aware of such things.
The final section of the book is titled ‘putting into context’.
The chapter ‘Explore the Ecosystem’ expands upon the ‘Evaluate Results’ chapter and asks you to think about external factors such as the OS and 3rd-party libraries. Elisabeth gives a great tip in this chapter on modelling what is within your system, what is external, and how they interface. I have found this extremely useful for working out where I can control the system and where things are outside of my control. Once this has been done you can then, as Elisabeth suggests, ask the ‘what if’ questions of these external systems. If you want to know more about these ‘what if’ questions, again, I recommend reading the book.
Here Elisabeth also gives advice on how to explore systems with no user interface. For someone such as myself, working where there are very few user interfaces, there is a lot of useful information in this chapter, especially for making me think of ways in which I could manipulate the interfaces and explore the APIs.
Next Elisabeth talks about how to go about exploring an existing system and gives some great tips on how to do this such as:
  • Recon Session
  • Sharing observations
  • Interviewing to gather questions
This chapter is useful for those who are testing, or have tested, an existing system and need new ideas to expand their exploration.
Elisabeth then talks about exploring the requirements, which is very useful for those who have requirement documentation; within the chapter lots of ways are offered in which you can explore them. One great suggestion is taking a test review meeting and turning it into a requirements review. Elisabeth offers many other suggestions on how to create charters from the requirements and use these during your exploratory testing sessions.
The final chapter of the book is about thinking about exploratory testing throughout the whole development of the system and how to make exploratory testing a key part of your test strategy. The key point I got from this chapter was the following:
When your test strategy includes both checking and exploring and the team acts on the information that testing reveals, the result is incredibly high-quality software
Elisabeth gives some real life experiences and stories of how she went about ensuring ‘exploring’ is a key part of testing.  This chapter is very useful for those who want to introduce exploratory testing and are not sure how to go about doing this.
At the end of the book there is a bonus section on interviewing for exploratory testing skills and some details about the previously mentioned cheat sheet.
This is now my testing ‘go to’ handbook, and to me it is as important as my other ‘go to’ testing reference book, The Art of Software Testing by Glenford Myers.
I recommend that all testers have a copy of Explore It!, as should anyone who works with testers. There is information in this book that can help developers with their unit tests by making them ask ‘have I thought of this?’. It can be used by product owners to put together their own charters for things they feel are important to be investigated or explored.
Would I recommend buying this book?  Heck! YES.

Wednesday, 30 October 2013

A quick way to start doing exploratory testing

Whilst following the tweets from Agile Testing Days using the hashtag #agiletd, I came across the following quote made by Sami Söderblom (http://theadventuresofaspacemonkey.blogspot.co.uk/) during his presentation 'Flying Under the Radar':

"you should remove 'expected results' and 'steps' from test cases"

Others attending the presentation tweeted similar phrases.
Pascal Dufour (@Pascal_Dufour): #agiletd @pr0mille remove expected result in testcases. Now you have to investigate to get the expected result.
Anna Royzman (@QA_nna): Remove Expected Result from 'test case' - to start thinking @pr0mille #agiletd
Dan Ashby (@DanAshby04): "you should remove 'expected results' and 'steps' from test cases" - by @pr0mille at #AgileTD - couldn't agree more!!!
Pawel Brodzinski (@pawelbrodzinski): Removing the expected result thus safety net from the equation enables creativity. Ppl won't look for compliancy anymore. @pr0mille #agiletd
This was a WOW moment for me. I have sometimes struggled to get people to adopt exploratory testing, with people finding it hard to create charters and make them flexible enough. It may not be an ideal solution, but for me it is a way in to teams that may be deeply entrenched in test cases and test scripts.

Thanks Sami - another creative idea that I most certainly will put into use.