Thursday 27 June 2013

Why we need to explore

This article looks at why, within testing, we need to carry out exploratory testing.

It is based upon a diagram that James Lyndsay drew during a recent diversity workshop.

When we talk about expectations we mean something that is written up front, or known about before delivery of the product: for example, requirements or specification documentation. There are other types of expectation that depend on the person, such as experience, domain knowledge or business knowledge.

The deliverable is the product being delivered and is normally the software under test.

When the product is delivered, sometimes the expectations are met by the deliverable and sometimes they are not. The area shown on the diagram where the ovals overlap is where the deliverable meets the expectations. This could be seen as what, within the testing world, we would call verifying requirements, user stories and other information that is known, or that we know about.

**Note: the area where expectation meets deliverable is not to scale or representative of a real-world situation; it is for clarification purposes only.

The expectations to the left of the overlap are those that are not met by the deliverable; these could be defined as bugs, defects, issues or a misunderstanding of what was expected.

The stuff to the right of the overlap is the stuff in the deliverable that you did not expect. This is where you need exploratory testing to discover what could be vital and important information for someone who matters. It is the stuff you did not know about, nor could have known about, until you started to do some testing.

Outside the ovals is an area where you had no expectations and which is not in the deliverable, and as such it is not important with regard to testing for the purposes of this article.

The following diagram is a visual representation of what I wrote above.


With this as a model we can now start to think about what James Bach and Michael Bolton discussed with reference to the terms checking and testing. I see that for the stuff we know about (the expectations) we should look at covering this with machine checking (automation). This does not mean we should automate ALL expectations; we need to base our automation on many factors: cost, priority, risk, etc. (I plan a future article on not doing too much automation.) If we do this we should have some confidence that our deliverable meets our expectations. This then allows testers to start to do some 'testing' and uncover the useful information that exists in the deliverable that no one knows about until some testing is carried out.
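To make the distinction concrete, here is a minimal sketch of what such a machine check could look like in Python. The discount function, its rule and the expected values are all hypothetical, made up purely to illustrate checking a single written expectation; they do not come from any real specification.

    # Hypothetical machine check: it verifies one written-down expectation
    # and nothing more. The discount rule below is invented for illustration.

    def calculate_discount(order_total):
        """Toy implementation of the behaviour the expectation describes."""
        return order_total * 0.10 if order_total >= 100 else 0.0

    def test_discount_meets_written_expectation():
        # Expectation from the (hypothetical) specification: orders of 100
        # or more get a 10% discount, smaller orders get none.
        assert calculate_discount(100) == 10.0
        assert calculate_discount(99) == 0.0

    if __name__ == "__main__":
        test_discount_meets_written_expectation()
        print("Known expectation verified.")

A check like this can confirm the overlap area on every build, but it says nothing about the stuff to the right of the overlap; that is what the testers are freed up to explore.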



This is why, IMO, the only way we can test is not by following scripts (even though they can be a useful aid for meeting some elements of our expectations) but by exploring the deliverable and finding out what it is actually doing that we did not expect it to do.

My next article will build on this model and revisit measuring test coverage.




11 comments:

  1. Things are even more complicated than this model suggests.

    There are desires and there are expectations. Both desires and expectations may be explicit or tacit. Explicit desires may be represented accurately or not; some will, some won't. They will be represented several times, often in long chains of communication, which will succeed and fail to some degree. Failures will be reinterpreted and repaired. In any case desires and expectations will always be represented incompletely. All of these things involve different people in different relationships, and all those things evolve over time in ways that are unpredictable. The deliverable is not only the product, but also its relationship to the products and systems and people with which it interacts. Whatever confidence we have that the deliverable meets our expectations is on very shaky ground, considering how little we can know about our expectations.

    Even after all this, only a fraction of the explicit, multi-person, time-bound expectations or desires are machine-decidable, so saying "this should be automated" seriously misses the mark. And even then automating the fragment that we can automate in finite time is an exploratory process, not a scripted process.

    This is not to say that your conclusion is wrong; on the contrary. Exploration is not an activity unto itself; it's part of what happens every step of the way. Peel the onion and the whole process is suffused with exploration, discovery, investigation, and learning.

    ---Michael B.

  2. Many thanks Michael for your great comments (as always); there is nothing I disagree with in what you write.

    On reflection, maybe I should have made my target audience for this article clear. I have tried to keep the model simple, so as not to make it overcomplicated when starting a discussion with people on why we need to do testing. It is aimed at people outside the testing arena who can nevertheless be significantly influential in business decision making which could impact testing.

    With regards to the 'should be automated' remark missing the mark, I see this as a starting point to aid a discussion. We constantly see remarks made within our craft that we should automate everything (as per the recent comment on your blog - http://www.developsense.com/blog/2013/02/manual-and-automated-testing/comment-page-1/#comment-13100) and that manual testing is dead. With this remark I hope we go down the line of... of course we cannot automate everything... so what, to you, at this time, would be useful things to automate that would allow testers to test?

    I hope this clarifies a little of my intention for this post.

  3. I'm not quite sure I fully understand. A tester following a script should still make observations. My experience (script or not) is that if a tester sees a feature or behavior that wasn't communicated anywhere, they would ask the questions, "Why are you here? What do you do? Who gets billed because you are here?"

    Testers who follow scripts should still make observations. I guess you could make an argument that a tester following a script may develop tunnel vision, but I would wager that same tester performing that same test is following ‘something’ whether it’s in their head or written down.

    I think you have some great points that most, if not all, testers can benefit from. For me, the ET vs. script argument is a distraction from the meat of what you're saying. There are expectations, there are written interpretations of those expectations, there are developed (coded) interpretations of those expectations, and there are tested interpretations of those expectations. I guess I've never really felt like a script distracted me from doing my job, which is reconciling those interpretations to some version of the 'truth'.

    That said, I did like the post as I have most of your posts. Thank you for sharing your experiences.

    Replies
    1. Many thanks for your comment.

      I was not aware that this post came across as a script vs ET argument, since I did state that scripts are useful aids. If that is the case then I may have got the message delivery wrong.

      I think it comes down to what you said: that testers 'should' make observations. I have a view that, for most things, if you can script it you could automate it. IME I have seen many testers blindly follow test steps, carry out a box-ticking exercise (I am guilty of that in my past) and only check that it meets the written expectations. I agree testers can do more when following a script and they should do so. This is like going off the beaten track, and hence it becomes an exploratory session rather than a scripted one. So it could be a question of semantics in usage of words, but I feel we are on the same track of thought.

    2. @Anonymous...

      I disagree. First, we say that the tester is following the script, but that can only be true to some degree. In my classes, one of my exercises includes a seven-step installation script. If people followed the script exactly the way I wrote it, no one in the class would have any trouble with it, and they would all finish at almost exactly the same time. Yet they never do. That's because it's impossible for someone to read it exactly the way I wrote it, because human communication is always interpreted; it is never simply followed. My script doesn't tell them what to do; it affords interpretations of what to do. I'd recommend reading Harry Collins (Tacit and Explicit Knowledge and The Shape of Actions) and Marshall McLuhan (or perhaps a more digestible summary here http://www.developsense.com/blog/2007/06/mcluhan-thinking-for-testers/, here http://www.developsense.com/articles/2007-09-McLuhanForTesters.pdf or here http://individual.utoronto.ca/markfederman/CultureOfInnovation.pdf).

      Second, even to the limited degree to which the tester is "following" the script, she is doing something else: she is applying heuristics. The semantic distinction is important here. Heuristics are not followed; they are applied, and their success is affected by factors that include the tester's judgement and skill, and the context in which the heuristics are applied.

      The exploratory vs. scripted question is not a distraction from what John is saying; on the contrary, it's at the very core of what you're saying.

      Cheers,

      ---Michael B.

  4. At best, executing a script is one person interpreting another person's interpretation of a third person's verbal communication of their intended requirement. The only way for a written script to be a fully scripted approach would be to remove the power of observation and interpretation from the person executing their interpretation and give them a direct line into the mind of the user. Since there are always interpretations, I do not see how there can be a truly scripted approach. At any point where you introduce communication (verbal or written), the process becomes fallible. That brings us back to a written script. A tester working with a written script still applies the same power of interpretation and observation. A tester with a checklist, a tester with a script, or a tester with only their brain still has to reconcile the differences between all of these interpretations, although I concede that person A writing a script for person B adds an additional layer.

    I guess I just don’t understand the difference between someone using a script vs. someone using something else. The problem of reconciling the problems caused by interpretations and communication transcends these models.

    Replies
    1. ISTQB defines a test script as a test procedure - a document specifying a sequence of actions for the execution of a test, also known as a test script or manual test script. [After IEEE 829]

      So by doing this you should be following a set of instructions which will have expected outcomes. Therefore a script has fairly rigid and inflexible steps that you MUST follow (in accordance with ISTQB). Anything you do that is not within the defined steps you should be following is no longer scripted, and in some organisations will indicate that you have not followed the instructions exactly. How would you be able to say that the test script passed or failed if you did not follow the steps precisely?

      Exploratory testing gives you the freedom to explore and examine the system by means of experimentation, using oracles and heuristics to guide your testing effort. It has structure, by means of charters and missions to achieve during your testing session, but it is a way to encourage the stuff you say every tester should be doing (observations, noticing) by removing barriers rather than enforcing an inflexible, unnatural way to discover information.

      I recommend you look at the material for Session Based Test Management http://www.satisfice.com/sbtm/ and look more into what the difference is between a script and something else. The BBST course material is a good start and is FREE - http://www.testingeducation.org/BBST/. I would highly recommend attending a Rapid Software Testing course if it is possible for you to attend one. It may give you some useful insight into how you can address the reconciliation problems you appear to be struggling with.

  5. "How would you be able to say that the test script passed or failed if you did not follow the steps precisely?"

    I think that's the part that causes me problems. I do not consider a pass/fail as part of (nor a requirement for) a script. I also would not say an expected result is necessarily exclusive to a script-based approach. A pass/fail condition requires some sort of analysis. We often use scripts where we do not (or at the time of creation, cannot) pre-determine an expected result.

    At best a script is one person’s interpretation and no matter how precise, there is always room for a misunderstanding as with all forms of communication. If person A writes a script with an expected result, person B interprets that and should perform a static analysis of that script against (presumably) some requirement or model.

    Perhaps this is my misunderstanding, but when speaking of an exploratory approach, shouldn’t we be careful about pairing that with charters and missions? Isn’t it possible for a junior tester to apply someone else’s heuristics in a scripted manner (example: I don’t necessarily understand why I’m doing this, but so-and-so used this so I will)? I was under the impression an exploratory approach was based on the intent of the testing, using knowledge gained to improve and make better decisions regarding testing on a particular project. Perhaps it is worth mentioning I am not ISTQB certified, so perhaps we’re coming from a different definition of script. Language is a funny thing.

  6. Anonymous,

    The following is an article from Jon Bach describing a "tester freedom scale" that I find useful when people start to talk about "scripted" vs. "exploratory." Its main message is consistent with a theme expressed above. These terms are part of a continuum.

    http://www.quardev.com/blog/a_case_against_test_cases

    Replies
    1. Many thanks for your comment Justin, I had forgotten all about that article from Jon.

      regards

      John

    2. Thanks for that reference and I apologize for a delayed response. We just moved into a new home and I am just now finding the time to catch up on previous discussions. I like this article by Jon and I think it more eloquently summarizes the concepts I’ve been struggling with and trying to communicate.
      “It's because I have not yet seen evidence that a scripted test case can be run by a human the same way every single time. On the flip side, it's also because I believe exploratory testing can include scripts. Because this line is blurred for me, I don't know what it is I'd be comparing in a test-off between scripted and exploratory.
      If "scripted" means "written down in advance," that could mean that when I'm exploring, (which many think of as "unscripted"), I am doing scripted testing when I use a script or a procedure to frame my exploration. Rightly so, I can have a model of the test or the feature, a charter, a mission statement, a user role I'm supposed to play - yet still be what Cem Kaner calls "brain engaged": alert and attune to emerging context.”
      That is, to separate the framing of the test from the execution. For example, the presence of a script (and possibly an expected result) doesn’t necessarily mean the person testing is using a scripted approach, any more than the presence of a checklist or charter implies the tester is using an exploratory approach. Excellent reference, Justin. Thank you for sharing. I’ll add that to my favorites list.
