Sunday, 29 December 2013

Is Defect Removal Efficiency a Fallacy?

I noticed a tweet by Lisa Crispin in which she said she had commented on this article: http://blog.btdconf.com/?p=43. I may not agree with everything in the article, but it makes some interesting points that I may come back to at a later date. I still have not seen that comment from Lisa yet, though...

However I did notice a comment by Capers Jones which I have reproduced below:

Capers Jones
December 29, 2013 at 10:01 am
This is a good general article but a few other points are significant too. There are several metrics that violate standard economics and distort reality so much I regard them as professional malpractice:
1. Cost per defect penalizes quality, and the whole urban legend about costs going up more than 100 fold is false.
2. Lines of code penalize high-level languages and make non-coding work invisible.
The most useful metrics for actually showing software economic results are:
1. Function points for normalization.
2. Activity-based costs using at least 10 activities such as requirements, design, coding, inspections, testing, documentation, quality assurance, change control, deployment, and management.
The most useful quality metric is defect removal efficiency (DRE), or the percentage of bugs found before release, measured against user-reported bugs after 90 days. If a development team finds 90 bugs and users report 10 bugs, DRE is 90%. The average is just over 85%, but top projects using inspections, static analysis, and formal testing can hit 99%. Agile projects average about 92% DRE. Most forms of testing such as unit test and function test are only about 35% efficient and find one bug out of three. This is why testing alone is not sufficient to achieve high quality. Some metrics have ISO standards, such as function points, and are pretty much the same, although there are way too many variants. Other metrics such as story points are not standardized and vary by over 400%. A deeper problem than metrics is the fact that historical data based on design, code, and unit test (DCUT) is only 37% complete. Quality data that only starts with testing is less than 50% complete. Technical debt is only about 17% complete since it leaves out cancelled projects and also the costs of litigation for poor quality.

My problem is with the use of this metric to measure the quality of software: I feel it is useless because it is too easy to game. Once people (or a company) know they are being measured, their behaviour adjusts on a psychological level (intentionally or not) so that they look good against whatever they are being measured on.

So, using the example given by Capers:

Say a company wants their DRE to look good, so they reward their testing teams for finding defects, no matter how trivial. The teams end up finding 1,000 defects; the customers still only find 10.

Using the example above, that means this company can record a DRE of 99.0099%. WOW – that looks good for the company.

Now let us say they really, really want to game the system and they report 1,000,000 defects against the customers' 10 – at that point this becomes a worthless way to measure the quality of the software.
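To make the arithmetic behind this explicit, here is a minimal sketch in Python. It is purely illustrative – the defect counts are the made-up ones from the example above, not real project data:

```python
def dre(internal_defects, customer_defects):
    """Defect Removal Efficiency: the percentage of all known defects
    that were found before release."""
    total = internal_defects + customer_defects
    return 100.0 * internal_defects / total

# The example Capers gives: 90 found by the team, 10 reported by users.
print(dre(90, 10))          # 90.0

# Reward testers for raw defect counts and the figure inflates,
# even though the customer still sees the same 10 escaped bugs.
print(dre(1_000, 10))       # ~99.01
print(dre(1_000_000, 10))   # ~99.999
```

Notice that the metric only moves with the counts; it says nothing about the severity of what escaped, or about the defects nobody has found yet.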

It also takes no account of the defects that still exist but have never been found. How long do you wait before you can say your DRE figure is accurate? What if the client finds another 100 defects six months later, or a year later? What if the company's own testing finds another 100 after the release to the client – how is that included in the DRE percentage?

As any tester would ask:
IS THERE A PROBLEM HERE?
This form of measurement does not take into account the technical debt of dealing with an ever-growing list of defects. Measuring with such a metric is flawed by design, never mind the other activities mentioned by Capers, which carry hidden costs of their own and will quickly spiral. By the time you have adhered to all of this, given the current market place of rapid delivery (DevOps), your competitors have prototyped, shown it to the client, adapted it to meet what the customer wants and released it – implementing changes to the product as the customer desires rather than focusing on numbers that quickly become irrelevant.

At the same time I question numbers quoted by Capers such as:
  • Most forms of testing such as unit test and function test are only about 35% efficient and find one bug out of three.
  • Other metrics such as story points are not standardized and vary by over 400%
  • Quality data that only starts with testing is less than 50% complete
  • Plus others
Where are the sources for all these metrics?  Are they independently verified?

Maybe I am a little more cynical about numbers being quoted, especially after reading "The Leprechauns of Software Engineering" by Laurent Bossavit. Without any attribution for the numbers, I do question their value.

To finish this little rant I would like to use a quote often attributed to Albert Einstein:
"Not everything that can be counted counts, and not everything that counts can be counted."

Thursday, 26 December 2013

Writing a Book

I have decided that in the coming year I will be writing a book about software testing and psychology. This has been on my list of things to do for a while, and I am now putting aside time to actually get around to doing it. As such, I may not be updating my blog as much as I normally would over the coming year – well, until the book has been completed.

If you are interested in having a look at a sample from the book, please visit the following page: https://leanpub.com/thepsychologyofsoftwaretesting

You can download the first chapter as a sample for FREE.

Thursday, 21 November 2013

Mapping Enough to Test

I have seen, on far too many occasions whilst working in testing, people spending months gathering information and creating test plans to cover every single requirement, edge case and corner case.

Some people see this as productive and important work; in the dim and distant past I did too. I have learnt a lot since then, and now I do not think it is as important as people make it out to be. I see it as a waste of effort and time – time which would be better spent actually testing the product and finding out what it is doing. This is not to say that 'no' planning is the right approach to take, rather that the test planning phase may be better suited to defining what you need in order to start doing some testing. It is more important to discover things that could block you or, even worse, prevent you from testing at all. This article looks at the planning phase of testing from my own personal experience and viewpoint.

The starting point for this article was re-reading an article that Michael Bolton wrote for a previous edition of Sticky Minds called 'Testing Without a Map'. In that article Michael talked about using heuristics to help guide your testing effort; at the time he suggested using HICCUPS as a guide to your testing, with a focus on inconsistencies. That article was about useful approaches when actually testing the product rather than about the planning phase. This article focuses on what happens before you actually test.

The only way to know something is to experience it, and by experiencing what the software is doing you are testing it. My own experience is that there is normally a delay between something being developed and testers having something they can test (yes, even in the world of Agile); this is the ideal time in which we can, and should, do some test planning. But what do we include in our plan? If we follow the IEEE standard for test planning, we get the following areas recommended for inclusion in the test plan:

1. Test plan identifier – unique number, version, identification of when an update is needed (for example at x% requirements slip), change history.
2. Introduction (management summary) – in short, what will and will not be tested, plus references.
3. Test items (derived from risk analysis and test strategy), including:
   a. Version
   b. Risk level
   c. References to the documentation
   d. References to incident reports
   e. Items excluded from testing
4. Features to be tested (derived from risk analysis and test strategy), including:
   a. Details of the test items
   b. All (combinations of) software features to be tested or not (with reasons)
   c. References to design documentation
   d. Non-functional attributes to be tested
5. Approach (derived from risk analysis and test strategy), including:
   a. Major activities, techniques and tools used (with a number of paragraphs for items of each risk level)
   b. Level of independence
   c. Metrics to evaluate coverage and progression
   d. A different approach for each risk level
   e. Significant constraints regarding the approach
6. Item pass/fail criteria (or completion criteria), including:
   a. The criteria to be used
   b. For example: outstanding defects per priority
   c. Based on standards (ISO 9126 parts 2 and 3)
   d. An unambiguous definition of the expectations
   e. Do not count failures only; keep the relation with the risks
7. Suspension and resumption criteria (to avoid wastage), including:
   a. Criteria for suspending all or a portion of the tests (at intake and during testing)
   b. Tasks to be repeated when resuming testing
8. Deliverables (a detailed list), including:
   a. Identification of all documents – to be used for the schedule
   b. Identification of milestones and standards to be used
9. Testing tasks, for preparing the resource requirements and verifying that all deliverables can be produced.
10. The list of tasks, derived from Approach and Deliverables, including:
    a. Tasks to prepare and perform tests
    b. Task dependencies
    c. Special skills required
    d. Tasks grouped by test roles and functions
    e. Test management
    f. Reviews
    g. Test environment control
11. Environmental needs (derived from points 9 and 10), specifying the necessary and desired properties of the test environments, including:
    a. Hardware, communication, system software
    b. Level of security for the test facilities
    c. Tools
    d. Anything else, such as office requirements

WOW – if we did all of this, when would we ever get time to test? The problem is that in the past I have been guilty of blindly following this, using test plan templates and lots of cutting and pasting from other test plans.

Why?  

This was how we had always done planning, and I did not question whether it was right or wrong, or even useful. Mind you, in the back of my mind I would wonder why we were doing it, since nobody ever read it or updated it as things changed. Hindsight is a wonderful thing!

My thinking about what we really need to do when planning has changed drastically; now I like to do just enough planning to enable a 'thinking' tester to do some testing of the product. The problem we face in our craft is that we make excuses not to do what we should be doing, which, by the way, is actual testing. We try to plan in far too much detail and map out all possible scenarios and use cases rather than focus on what the software is doing. Continuing the theme of 'the map' from the article by Michael Bolton, Alfred Korzybski once stated that

“The map is not the territory”

As a reader of this article what does that imply to you?

To me it was an epiphany moment: it was when I realised that we cannot, and should not, plan what we intend to test in too much detail. What Korzybski was getting at with this statement is that no matter how much you plan and how detailed your plan is, it will never match reality. In some ways it is like designing a map at 1:1 scale. How useful would you find that kind of map for getting around? Would it be of any use? Would it actually map the reality of the world you can see and observe? It would not be dynamic, so anything that had changed or moved would not be shown. What about interactive objects within the map? They are constantly changing and moving, and as such, by the time you get hold of the map it is normally out of date. Can you see how that relates to test plans?

What this means for the reality of software testing is that we can plan and plan and plan, but that gives no indication of the testing we will actually do. After a discussion with Michael Bolton on Skype, he came up with a great way of framing it: we need to split planning time into test preparation and actual planning.

You need to spend some time getting ready to test – getting your environments, equipment and automation in place – because without these you could be blocked from actually starting to do some testing. This is vital work, and far more important than writing down step-by-step test scripts.

The purpose of testing is to find out information, and the only way to do this is to interact with the application. It is said that most things are discovered by exploration and accident rather than by planning; planning for something and having that something happen is more than likely a coincidence. The problem with doing too much planning is that it becomes out of date by the time you get to the end of your testing. It is much better to have a dynamic, adaptive test plan that changes as you uncover and find more to test. One of the ways I have adopted this is by the use of mind maps; there have been many articles in the testing community about this subject, and if you want to know more I suggest you go and Google 'mind maps and software testing'.

The problem we have is that people are stuck in a mentality that test cases are the most important thing to produce when we start test planning. There is a need to move away from test cases towards missions (goals): something you could do and achieve in a period of time and, more importantly, something that is reusable, where how it is used will depend on the context and the person doing the mission. When planning, you only need to plan enough to start testing (as long as your test prep has been done); then, when you test, you will uncover interesting information and start to map out what you actually see rather than what you thought you might see. Your test plan will grow and expand as you become information- and knowledge-rich in what you find and uncover.

Jutta Eckstein, in her article on planning and controlling complex projects, makes the following statement:
Accurate forecasts aren't possible because the world is not predictable

So it is wise not to plan too far ahead: plan only enough to do some testing, find out what the system is doing, and adjust your planning based upon the hard information you uncover. Report this to those who matter – the information you find could be what is valuable to the business. Then look for more to test; you should always have a backlog, and the backlog should never be empty. The way in which I do this is to report regularly what we have found and what new missions we have generated based upon the interesting things we came across. I then re-factor my missions (a rough sketch of this re-ordering follows the list below) based upon:
  • Customer priority – how important is it to the customer that we do this mission?
AND
  • Risk to the project – if we did this mission instead of the one we had planned to do next from the backlog, what risk does that pose to the project?
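For what it is worth, the re-ordering itself is simple enough to sketch in a few lines of Python. This is purely illustrative – the Mission fields and the 1-to-3 scales are my own invention, not part of any standard or tool:

```python
from dataclasses import dataclass

@dataclass
class Mission:
    goal: str
    customer_priority: int  # 1 = most important to the customer
    project_risk: int       # 1 = highest risk to the project if we skip it

def refactor_backlog(backlog):
    """Sort so missions with high customer priority and high risk float to the top."""
    return sorted(backlog, key=lambda m: (m.customer_priority, m.project_risk))

backlog = [
    Mission("Explore the new export format with malformed input", 2, 1),
    Mission("Check the installer on the oldest supported OS", 3, 2),
    Mission("Investigate the intermittent timeout seen in the last session", 1, 1),
]

for mission in refactor_backlog(backlog):
    print(mission.goal)
```

The point is not the code but the habit: the backlog gets re-sorted every time new information arrives, rather than being fixed at the start.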

Paul Holland discusses this approach in more detail via an article Michael Bolton wrote.

To summarise: we need to think more about how much planning we do, and think critically about whether producing endless pages of test cases during test planning is the best use of our resources. We need to plan enough to do some testing and adapt our test plan based upon the information we uncover. We should re-evaluate what we intend to do often and adapt the plan as our knowledge of the system increases. It comes down to this: stop trying to map everything and map just enough to give you a starting point for discovery and exploration.

*Many thanks to Michael Bolton – for being a sounding board and providing some useful insights for this article.

Wednesday, 13 November 2013

Book Review - Explore It! by Elisabeth Hendrickson

The following is a review of the book "Explore It!" by Elisabeth Hendrickson.
Elisabeth Hendrickson website
Having followed Elisabeth on Twitter (@testobsessed) and used her test heuristics cheat sheet extensively, I was very excited when I found out that she was releasing a book about exploratory testing, and I was fortunate to receive an early ebook version. The following is my review of the book and of the things I found interesting, which I hope others may find interesting too.
The book starts with an explanation of testing and exploration, in which she mentions the debate on testing and checking; to me this gives a good grounding in the context Elisabeth sets for what follows in the book. I especially like the point she makes regarding the need to interact with the system:
Until you test—interact with the software or system, observe its actual behaviour, and compare that to our expectations—everything you think you know about it is mere speculation.
Elisabeth brings up the point that it is difficult to plan for everything and suggests we plan just enough. The rest of the first chapter goes into more detail on the essentials of exploratory testing and making use of session-based test management.
One part of the book I found useful was the practice sessions at the end of each chapter, which help you recap what was explained within the chapter. If you are the type to normally skip this kind of thing (like myself), on this occasion I would recommend that you give them a go – it really does help you understand what has been written in the chapter.
The next chapter introduces charters to the reader, and for me this is the most useful and important chapter of the book. It helped me to clarify some parts of the exploratory testing approach that I was struggling with and simplified my thoughts. Elisabeth explains a rather simple template for creating your own charters (I have added a small illustrative sketch of my own after the template below):

Explore (target)
With (resources)
To discover (information)

Where:
  • Target: where are you exploring?
  • Resources: what resources will you bring with you?
  • Information: what kind of information are you hoping to find?
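If, like me, you end up generating a lot of charters, it can help to treat the template as a tiny data structure. The sketch below is my own illustration of the Explore/With/To discover shape, not code from the book, and the field names are my assumptions:

```python
from dataclasses import dataclass

@dataclass
class Charter:
    target: str       # where you are exploring
    resources: str    # what you bring with you
    information: str  # what you hope to discover

    def __str__(self):
        return (f"Explore {self.target}\n"
                f"With {self.resources}\n"
                f"To discover {self.information}")

print(Charter(
    target="the login flow",
    resources="a proxy and a list of malformed passwords",
    information="how authentication failures are reported to the user",
))
```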
The rest of the chapter takes this template and, using examples, provides the reader with a way to create charters simply and, in some cases, quickly. Along the way she introduces rules that one may wish to follow to avoid turning charters into bad charters. She also offers advice on how to get information for new charters (joining requirement/design meetings, playing the headline game).
What, you do not know what the headline game is?  Well you need to buy the book to find out.
I have started to use this template to create charters for my own testing going so far as to add this template into the mind map test plans.  This to me was worth paying for the book just for this very useful and simple approach to chartering exploratory testing.
The following chapter takes you on a journey through the importance of being able to observe and notice things. This is a key element of exploratory testing, and looking for more things to test is a part of it. Elisabeth talks about our biases and how easy it is for us to miss things, and provides examples of how we might try to avoid some of them. She talks about the need for testers to question, and question again, to be able to dig deep and uncover information that could be useful. This chapter is useful for uncovering hidden information, and it suggests ways in which you can get more information about what you want to explore without the need for requirement documents. This is important, since it is better to have the skills that allow you to ask questions.
The next few chapters of the book look at ways in which you can change or alter the system to uncover more information by means of exploration. These chapters take the cheat sheet Elisabeth and others produced and add a lot more detail and practical ways to look at the system from a different perspective. These chapters include titles such as:
  • Find Interesting Variations
  • Vary Sequence and Interactions
  • Explore Entities and their relationships
  • Discover states and transitions
A great deal of this is found in part two of the book, and this section is something I repeatedly return to for quick inspiration about what I can do to explore the system more. It gives some great techniques on how to find variations in your system and how to model the way the system is working. It provides useful ways to help you find the gaps in your system, or even in your knowledge.
In the middle of the book there is a chapter called 'Evaluate Results', in which Elisabeth asks if you know the rules of your system. If you do not, then it would be useful to explore and find them. She explains the meaning of rules using 'never' and 'always': if you have a rule saying the system should always do something, then explore it; the same applies to 'never' – you can explore and uncover where these rules are broken. This chapter also looks at outside factors such as standards, and external and internal consistency. All of these are important when exploring the system, and Elisabeth reminds us in this chapter to be aware of such things.
The final section of the book is titled ‘putting into context’
The chapter 'Explore the Ecosystem' expands upon the 'Evaluate Results' chapter and asks you to think about external factors such as the OS and third-party libraries. Elisabeth gives a great tip in this chapter on modelling what is within your system, what is external, and how they interface. I have found this extremely useful for working out where I can control the system and where things are outside of my control. Once this has been done, you can then, as Elisabeth suggests, ask the 'What if' questions of these external systems. If you want to know more about these 'What if' questions then, again, I recommend reading the book.
Within this chapter Elisabeth also gives advice on how to explore systems with no user interface. For someone such as myself, who works with very few user interfaces, there was a lot of useful information here, especially for making me think of ways in which I could manipulate the interfaces and explore the APIs.
Next, Elisabeth talks about how to go about exploring an existing system and gives some great tips on how to do this, such as:
  • Recon Session
  • Sharing observations
  • Interviewing to gather questions
This chapter is useful for those who are testing, or have tested, an existing system and need new ideas to expand their exploration.
Elisabeth then talks about exploring the requirements, which is very useful for those who have requirement documentation; within the chapter there are lots of suggested ways in which you can explore them. One great suggestion is taking a test review meeting and turning it into a requirements review. Elisabeth offers many other suggestions on how to create charters from the requirements and use them during your exploratory testing sessions.
The final chapter of the book is about thinking about exploratory testing throughout the whole development of the system and how to make exploratory testing a key part of your test strategy. The key point I took from this chapter was the following:
When your test strategy includes both checking and exploring and the team acts on the information that testing reveals, the result is incredibly high-quality software
Elisabeth gives some real-life experiences and stories of how she went about ensuring 'exploring' is a key part of testing. This chapter is very useful for those who want to introduce exploratory testing and are not sure how to go about doing so.
At the end of the book there is a bonus section on interviewing for exploratory testing skills and some details about the previously mentioned cheat sheet.
This is now my testing 'go to' handbook, and to me it is as important as my other 'go to' testing reference book: The Art of Software Testing by Glenford Myers.
I recommend that all testers have a copy of Explore It!, as well as anyone who works with testers. There is information in this book that can help developers with their unit tests by making them ask 'have I thought of this?'. It can be used by product owners to put together their own charters for things they feel are important to investigate or explore.
Would I recommend buying this book?  Heck! YES.

Wednesday, 30 October 2013

A quick way to start doing exploratory testing

Whilst following the tweets from Agile Testing Days using the hashtag #agiletd, I came across the following quote from Sami Söderblom (http://theadventuresofaspacemonkey.blogspot.co.uk/) during his presentation 'Flying Under the Radar':

"you should remove 'expected results' and 'steps' from test cases"

Others attending the presentation tweeted similar phrases.
Pascal Dufour @Pascal_Dufour: #agiletd @pr0mille remove expected result in testcases. Now you have to investigate to get the expected result.
Anna Royzman @QA_nna: Remove Expected Result from 'test case' - to start thinking @pr0mille #agiletd
Dan Ashby @DanAshby04: "you should remove 'expected results' and 'steps' from test cases" - by @pr0mille at #AgileTD - couldn't agree more!!!
Pawel Brodzinski @pawelbrodzinski: Removing the expected result thus safety net from the equation enables creativity. Ppl won't look for compliancy anymore. @pr0mille #agiletd
This was a WOW moment for me. I have sometimes struggled to get people to adopt exploratory testing, with people finding it hard to create charters and make them flexible enough. It may not be an ideal solution, but for me it is a way in to teams that may be deeply entrenched in test cases and test scripts.

Thanks Sami - another creative idea that I most certainly will put into use.

Wednesday, 16 October 2013

Blog Action Day - Human Rights

Today is Blog Action Day and the theme for this year is Human Rights.

More details of this great cause can be found here: http://blogactionday.org/#

If you want to support this event and have a Twitter account, then use the hashtags #BAD2013 and #humanrights and the Twitter handle @blogactionday. Please tweet to raise awareness of this cause.

I came across the following poster produced by Zen Pencils and thought it was a fantastic illustration of the meaning of human rights. I hope he does not mind, but I have used the image on my blog here since it far surpasses anything I could ever have come up with. Please visit his site and support him as an artist.

(C) http://zenpencils.com/comic/134-the-universal-declaration-of-human-rights/
Please click on the link to see a high resolution version

Further reading on human rights and how you can be involved
Human Rights Day 10th December 2013


Tuesday, 15 October 2013

Are you ‘Checking’ or ‘Testing’ (Exploratory) Today?

Do you ask yourself this question before you carry out any test execution?

If not, then this article is for you. It starts with a brief introduction to the meaning of checking and testing in the context of exploration, and then asks the reader to think about the question, evaluate the testing they are doing, and determine from their answer whether what they are doing has the most value.

There have been many discussions within the testing community about the difference between 'checking' and 'testing' and how each fits within the practice of test execution.

Michael Bolton started the debate with his 2009 article 'Testing vs. Checking', in which he defined them as follows:

  • Checking Is Confirmation
  • Testing Is Exploration and Learning
  • Checks Are Machine-Decidable; Tests Require Sapience

At that time there were some fairly strong debates on the subject; in the main I tended to agree with the distinctions Michael made between 'checking' and 'testing' and used them in my approach when testing.

James Bach, working with Michael, then came along with another article to refine the definitions: 'Testing and Checking Refined'.

  • Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modelling, observation and inference.
  • (A test is an instance of testing.)
  • Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
  • (A check is an instance of checking.)

From this they stated that there are three types of checking (a small illustrative sketch follows the list below):

  • Human checking is an attempted checking process wherein humans collect the observations and apply the rules without the mediation of tools.
  • Machine checking is a checking process wherein tools collect the observations and apply the rules without the mediation of humans.
  • Human/machine checking is an attempted checking process wherein both humans and tools interact to collect the observations and apply the rules.
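To make 'applying algorithmic decision rules to specific observations' concrete, here is a minimal sketch of a machine check. It is my own illustrative example (the function and the expected value are invented), not something taken from James's or Michael's articles:

```python
# A machine check: a tool collects an observation (the function's output)
# and applies an algorithmic decision rule (equality with an expected value).
# No sapience is involved once the check has been written.

def vat_inclusive_price(net_price, vat_rate=0.20):
    return round(net_price * (1 + vat_rate), 2)

def check_vat_inclusive_price():
    observation = vat_inclusive_price(100.00)
    assert observation == 120.00, f"expected 120.00, got {observation}"

check_vat_inclusive_price()
print("check passed")
```

Deciding which checks are worth writing, and working out what a passing or failing check actually means for the product, still requires a human – the check itself is only the algorithmic part.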

The conclusion, to me, appeared to be that checking is a part of testing, but that we need to work out which would be best to use for the checking part: a machine or a human? This question leads to the reason for putting this article together.

James on his website produced a picture to aid visualisation of the concept:


© - James Bach - Testing and Checking refined

Since, in my world, checking forms a part of testing (as I have interpreted James's article), we need to pause for a moment and think about what we are really doing before performing any testing.

We need to ask ourselves this question:

Are we ‘checking’ or ‘testing’ (exploratory) today?

Since both form a part of testing, and both could, depending on the context, be of value and importance to the project, it is vital that we understand what type of testing we are doing. If we ask ourselves that question and our answer is 'checking', we then need to find out what type of checking we are doing. If the checking being performed falls under the category of machine checking, then we need to think about and find out why we, as thinking humans, are doing it rather than getting a machine to perform it. Things that could fall under this category include validation or verification of requirements or functions for which we already know what the answer will be. If this is the case then you need to ask

WHY:

  • are you doing this?
  • is a machine not doing this?
  • can a machine not do this for me?

The problem with a person carrying out manual machine checking is that it uses valuable testing time that could be used to discover or uncover information that we do not know or expect.* When people working in software development talk about test automation, this is what I feel they are talking about. As rightly stated by James and Michael, there are many other automation tools that testers can use to aid their exploratory testing or human checking, which could also be classified as test automation.

So even though checking is a part of testing and can be useful, it may not be the best use of your time as a tester. There are various reasons why testers end up carrying out machine checking by hand rather than automating it, such as:

  • Too expensive
  • Too difficult to automate
  • No time
  • Lack of skills
  • Lack of people
  • Lack of expertise. 

However, if all you are doing during test execution is machine checking, then what useful information are you missing out on finding?

If we go back to the title of this article: are you 'checking' or 'testing' today?

You need to make sure you ask yourself this question each time you test, and evaluate which you feel would have the most value to the project at that time. We cannot continue with the same 'must run every test check manually' mentality, since this only addresses the stuff we already know and takes no account of the risk or priority of the information we have yet to discover.

The information that may be important or useful is hidden in the things we do not expect or do not currently know about. To find it we must explore the system and look for the interesting details that this type of (exploratory) testing yields.

I will leave you with the following for when you are next carrying out machine checking:

…without the ability to use context and expectations to "go beyond the information given," we would be unintelligent in the same way that computers with superior computational capacity are unintelligent.
J. S. Bruner (1957) Going beyond the information given.


Further reading






Friday, 11 October 2013

Believing in the Requirements

Traditionally in testing there has been a large amount of emphasis placed upon 'testing' or 'checking' the requirements. An article by Paul Holland on functional specification blinders, and my current reading of Thomas Gilovich's excellent book How We Know What Isn't So, have made me re-think this strategy from a psychological perspective. I feel Paul was on the right track with his suggestion of not using the requirements/specification to guide your creative test idea generation but looking at alternatives. However, even these alternatives could limit your thinking and creative ideas because of the way we think.
The problem we have is that once we have been presented with any information, our inbuilt beliefs start to play their part and we look at that information with a biased slant. We are built to look for confirmations that match our beliefs; in other words, we look for things we want to believe in. So if we believe the implementation is poor, or that the system under test has been badly designed, we will look for things that confirm this and provide evidence that what we believe is true. We get a 'buzz' when we get a 'yes' that matches our beliefs. The same can apply when looking through the requirements: we start to find things that match our beliefs, and at the same time the requirements (especially if ambiguous) start to influence our beliefs so that we, as Paul discovered, only look for confirmations of what is being said. Once we have enough information to satisfy our beliefs, we stop and feel that we have done enough.
The other side of this is that any information that goes against our beliefs makes us dig deeper and look for ways to discount it. When faced with evidence against what we believe, we want to find ways to dismiss that information and find flaws in it. The issue is that if we are looking at requirements or a specification, there is normally not much that goes against our initial beliefs, due to the historic influence these documents can have, so we normally do not get to the stage of digging deeper into their meaning.
As Thomas Gilovich stated
People’s preferences influence not only the kind of information they consider, but also the amount they examine.
If we find enough evidence to support our views, then normally we are satisfied and stop. This limits our scope for testing and being creative. My thought on how to get around this, apart from following the advice Paul gives, is to be self-critical and question oneself.
When we are in confirming-our-beliefs mode, we are internally asking ourselves the following question:
 “Can I believe this?”
Alternatively, when we find information that does not match or confirm our beliefs, we internally ask ourselves the following question:
“Must I believe this?”
These questions are taken from the book by Thomas Gilovich referenced earlier, in which Gilovich states:
The evidence required for affirmative answers to these two questions are enormously different.
Gilovich mentions that this is a type of internal framing we do at a psychological level. Reading this reminded me to go back and read the article by Michael Bolton on test framing, on which I attended a tutorial at the EuroSTAR testing conference. I noted within Michael's article that there appeared, IMO, to be a lot of proving the person's beliefs rather than disproving them. In other words, many of the examples were answering the "Can I believe this?" question. This is not wrong – it is a vital part of testing, and I use the methods described by Michael a great deal in my day-to-day work. I wonder if this topic could be expanded a little by looking at the opposite and trying to disprove your beliefs; in other words, asking the "Must I believe this?" questions.
So, moving forward, I believe that we can use our biases here to our advantage to become more creative in our test ideas. To do this we need to look at ways to go against what we believe is right and think more negatively. The next time you look at a requirements or specification document, ask yourself the following:
“MUST I BELIEVE THIS”
And see where this leads you.

PS – this article is a double-edged sword: if you read it, you should now be asking "Must I believe this?"