Friday, 23 November 2012

Emergent Strategy


How often have you been asked why we do exploratory testing rather than planned and predicted scripts?  Recently I have been reading some material on corporate planning strategies, why some succeed and others do not, and looking at how this links into software development, especially from a testing perspective. 

Software can be very dynamic and react in unpredictable ways no matter how much planning we do.  It surprises us and, more importantly, it surprises the person who created it.  This goes against the commonly held notion that software is predictable because we planned with great care and detail what was going to be coded.  The problem is that we are human and may not act in a rational way, and this is reflected in our creations.  

So what is the best way to do software testing?  

The purpose of testing is to learn things about the product, and the best way of doing this is to experience it and learn by doing it.  This is best summed up by Nassim Nicholas Taleb:

 We are better at doing than learning. Our capacity for knowledge is vastly inferior to our capacity for doing things – our ability to tinker, play, discover by accident.

Within the corporate strategy world I have discovered that this appears to have a name: 'emergent strategy' (or realized strategy).  Looking further into this I found the following link: http://planningskills.com/glossary/154.php

One of the most interesting parts of that link, to me, was the following sentence:

Emergent strategy implies that an organization is learning what works in practice

Is this not similar to what we attempt to do when doing exploratory testing?  We try to learn about the product, and what is or is not working, by experiencing it.

An interesting point made in the article is the following statement:

Mixing the deliberate and the emergent strategies in some way will help the organization to control its course while encouraging the learning process.

This appears to link back to an article I wrote about hybrid testing and having a mixture of scripted checks and exploratory tests.
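
As a rough illustration of what such a hybrid could look like in practice, here is a minimal sketch in Python. The function under test (apply_discount) and the "surprise" checks are invented purely for illustration; the point is the shape: one deliberate, scripted check with a predicted outcome alongside a small, time-boxed exploratory sweep that simply looks for anything odd.

    import random
    import time

    # Invented function under test -- it stands in for whatever the product does.
    def apply_discount(price, percent):
        """Return the price after applying a percentage discount."""
        return round(price * (1 - percent / 100.0), 2)

    # Deliberate (scripted) part: one check with a predicted outcome.
    def scripted_check():
        assert apply_discount(100.0, 10) == 90.0, "known example should still hold"

    # Emergent (exploratory) part: a short, time-boxed sweep around the same
    # feature, looking for surprises rather than confirming a prediction.
    def explore(seconds=2):
        deadline = time.time() + seconds
        surprises = []
        while time.time() < deadline:
            price = round(random.uniform(0, 10_000), 2)
            percent = random.choice([-5, 0, 33.3, 100, 150])
            result = apply_discount(price, percent)
            # Note anything that looks odd for a human to investigate later.
            if result < 0 or result > price:
                surprises.append((price, percent, result))
        return surprises

    if __name__ == "__main__":
        scripted_check()
        for finding in explore():
            print("worth a closer look:", finding)

The scripted check keeps the course under control; the exploratory sweep is where the learning happens.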

So, to go back to my first statement about the purpose of exploratory testing: since we cannot predict everything that the software will do, the only way to understand it and to learn is to explore it, and that is the purpose of exploratory testing.  To uncover information that may prove to be of value to someone who matters.

I may need to look into emergent strategy a little more.

Wednesday, 14 November 2012

Place your bets


I have recently finished reading the excellent book “The Click Moment” by Frans Johansson (@Frans_Johansson), and I was amazed by how much of the material in the book appears to relate to software testing.  The book is about creating opportunities in an unpredictable world, not about putting in hours and hours of practice.  The author explains that if there are fixed rules, and those rules do not change too much, then the 10,000 hours rule of practice works.  However, he points out that we are living in a world in which the rules are always changing and unexpected (random) things can and do happen.

Some of the statements made in the book appear to have a correlation to the current state of software testing and the various “schools” (the scripted vs the exploratory debate).  The first thing that caused me to think was some of the comments on planning and how it stifles opportunities for random events and for uncovering new and exciting things.  

For example, Frans states the following:

“…In fact, it might mean that the plan is outdated before you even start to execute it…”

I have often experienced this within companies that believe we can plan upfront and know all we need to know to write scripts before we actually use the product.  I have seen test plans which, once I start doing some testing, are hopelessly out of date, and I then spend unnecessary time trying to retrofit what I am experiencing when testing to what the plan says.  Doing this makes me take my eye off the ball and miss chances to find out what could be important information.

Frans then makes a statement which could be taken straight from the case for why we need to do exploratory testing.

“..As ironic as it may sound, it actually pays to schedule time to do something unscripted and unplanned. We need to leave enough room in our day to explore things that are not connected to our immediate goals. We need to free ourselves up to become aware of hidden opportunities and expose ourselves to significant click moments. Leave some flexibility in your schedule. Then, make sure you use the flexibility to explore something unrelated to what you are doing or to follow up on a curious idea you have been considering”

This offers so much potential for uncovering new and valuable information without the restriction of following someone else’s thinking.  This way of testing, in my world, can lead to many serendipitous moments.

So how can we help to make this happen in software testing?  Is there anything we can do to help create more of these moments of randomness?

Within the book Frans gives five great tips which may encourage more serendipity.  I have listed the tips below with a description of how each could apply to exploratory testing.

1. Place Many Bets 

Having a single exploratory testing mission which can consist of an infinite number of tests (bets) is surely much better than having a single scripted test in which you are only placing one bet.

2. Minimize the Size of the Bets

Instead of spending lots of time creating a test script based upon assumptions, do the minimum required to do some actual testing, and time box your testing sessions (see the sketch after this list).

3. Take the Smallest Executable Step

Do the minimal amount of planning needed to enable you to do some exploratory testing.  We need to stop thinking that writing detailed test scripts and plans, before we really know anything, will lead to us uncovering lots of information about the system.

4. Calculate Affordable Loss, Not ROI

We still believe that there is a justifiable, measurable return on planning ahead and creating detailed test plans and scripts, which we then discover are outdated and very costly to maintain, yet we insist they are useful because someone else may use them in the future.  Instead, look at creating lots of test ideas using test models and heuristics, which are cheap to create and, if of no use, can easily be discarded once we uncover more information when testing.  We should be looking at testing and its cost-effectiveness in terms of what we can afford to throw away if our assumptions are wrong.

5. Use Passion as Fuel

This is so important: people with a passion for what they are doing are the drivers of opportunities.  This type of person is one who, if they get stuck or falter, picks themselves back up, dusts themselves off and looks for ways around the problem.  These are the innovators, the people who can radically change the market and improve what is already there.  There is a need to employ more of these passionate types of people in the world of software testing.  I am getting fed up with the 9-5 testers, the ones who have no desire to learn or improve themselves, the ones not reading this blog.
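
To make the first two tips a little more concrete, here is a small sketch (in Python, with made-up test ideas and function names) of what “many small bets” might look like as a time-boxed session: a list of one-line test ideas that are cheap to write and just as cheap to throw away.

    import time

    def run_session(ideas, minutes=60):
        """Spread a time-boxed session across many small bets (test ideas)."""
        budget_per_idea = (minutes * 60) / len(ideas)
        session_log = []
        for idea in ideas:
            print(f"Next bet ({budget_per_idea / 60:.0f} min): {idea}")
            # ...explore by hand or with tools, stopping when the budget is spent...
            session_log.append({"idea": idea, "findings": []})
        return session_log

    # The ideas themselves are invented for illustration; each one is a tiny bet.
    ideas = [
        "What happens with an empty basket at checkout?",
        "Paste 10,000 characters into the search box",
        "Change the system clock halfway through a payment",
        "Lose the network connection while a file uploads",
    ]
    session_log = run_session(ideas, minutes=60)

Each idea that turns up nothing can be discarded at no great loss; each one that turns up something interesting becomes the seed of the next bet.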

I do recommend that anyone involved in software development reads this book; it gives some great advice on how we can adapt what we do and how we think to improve our chances of delivering successful projects. 

PS No I am not being paid by Frans for writing this 







Tuesday, 13 November 2012

Testers should learn code


At the Eurostar conference in Amsterdam, Simon Stewart (@shs96c) presented a keynote on Selenium over the years.

During the keynote Simon made the following comment:

"If you are testing the web you absolutely need to be able to code"

Now I am sure that out of context this could be taken in many ways, and Rob Lambert has produced a very good discussion on this same subject here which covers both sides.

I will add that Simon did follow this up with the following line:

"If not become a specialist so you can add value"

This led to some interesting exchanges on Twitter (search for #esconfs) in which people came down on either side. I had concerns that people would only hear the first part, and that this could create barriers to some great people being involved in testing just because they have no interest in coding, or even in learning to code.

IMO, I am not sure about this statement, and it caused much debate after the keynote.  If you have an interest in learning to code then do so; otherwise do something else that can add value. The discussions continued during lunch, and the rest of this article is my own thoughts on the subject.

After talking to Simon afterwards, it appears his message had been taken the wrong way.  He said it is helpful to be able to code, and that if all you do is run test (check) scripts then you may not have a job.

My concern is that forcing people to code when they have no interest could be a barrier to great people wanting to enter the world of software testing.

Dot Graham stated the following: "Lose a good tester but gain a poor programmer".

I am not convinced that everyone needs to code; it can have its advantages, but there is another perspective. If you do not understand the code you may test in a different, non-confirmatory way.  You may be able to ask the difficult questions, such as why did you do it this way and make it so complex?  You may not have a bias built up from your own coding experience and knowledge.  I think for some it can be useful, but insisting on it is a very dangerous path to follow.

Some of the discussions that followed went along the lines that if testers refused to learn we should not employ them.  This is where I had a WTF moment.....  I had not said anything about testers not wishing to learn; what I was saying was that some people may not have an interest in or a knack for coding, or may find it impossible to grasp for whatever reason.  However, they can make one hell of a tester and show a great thirst for learning new ways to exercise the software that are novel, unique and valued.  This was the second point that Simon was making, which sadly appeared to have been missed.  As long as you can find ways to add value then you can be a tester.

I am afraid that a statement of this sort can be used as a filter to prevent people entering this great world of software testing.  There are many other things that, IMO, testers could learn about, such as grounded theory, anthropology, social sciences, humanities, creative arts, and the list goes on.  There are some great testers who have learnt these things; should we prevent them from working as software testers because they have no desire to learn to code?

I will finish on a positive note: it was great to chat with Simon, even if our views are slightly different, and he is a very thoughtful and passionate person.  I look forward to meeting up again sometime in the future and finding another topic to discuss.

PS Thanks to Rob Lambert for being the referee!!

(edited some of the grammar :o( )

Thursday, 8 November 2012

Some good ideas to aid testing in 7 sentences

I was asked to do a small talk in the community hub at Eurostar 2012 test conference on the following topic:

The best test technique in seven sentences

which I changed to the following

Some good ideas to aid testing in 7 sentences

Since it might be best just for now.....

Some people have asked that I post the presentation somewhere, so here it is.



DISCOVER

...all the information you can: verbal, non-verbal and written.

EXPLORE

...the system to learn about what it actually does rather than what you think it does.

TALKING

...and communication are important; you need to talk to everyone.

EXAMINE

...all information provided or uncovered by exploring; this is your evidence.

CONTEXT

...is crucial, all your testing should be driven by the context at that time.

THINK

...and engage your brain, testing is all about thinking.

D.E.T.E.C.T

Act like a detective and DETECT your bugs

Tuesday, 30 October 2012

Making the most of the conference experience


FOR THOSE ABOUT TO CONFERENCE… WE WILL TALK TO YOU…..

Since it is so close to the Eurostar conference in Amsterdam, I thought I would post a conference-related blog. #esconfs @esconfs

I had planned to do this blog article a few weeks ago, when I posted on Twitter that the point of conferences is not all about the tracks but about the conferring. Many thanks to Rob Lambert (The Social Tester) for spurring me to complete this post, together with his great A quick guide to Eurostar 2012.

What?  You have not read it yet!!!  Then please stop reading this and go and do so…… (There, Rob, plug over with. I expect the cheque to be in the post tomorrow, or at least buy me a drink at Eurostar!)

This article is not going to be a “you should do this or should do that” piece, and it follows some of the messages that Rob states in his article. I remember my first conference (many, many years ago) and how intimidating it can be.  It is even worse if you are a little shy or a quiet type.

The best value you can get from a conference is not from the tracks or the keynotes but from talking to other people and conferring.  You soon learn that the problems you face in your day-to-day job are similar to others'. This may help you to feel that you are not alone and that the issues you face are normal.  It is so easy within our profession to become insular and trapped in our own bubble.  Conferences give you a great opportunity to see that the issues you experience are common and that others are facing the same problems.  Discussing these issues with others is a great part of attending conferences, and sometimes you may discover solutions which could help you resolve the problems you are facing.

There are many ‘strong’ characters in the software testing community and some can appear to be very intimidating.  Try not to be scared: approach them, and maybe just listen to begin with; this is what I did at my first few conferences, when I felt too nervous or inferior to even think about taking part in the conversation.   It was at one conference that what I would consider a high-profile figure within the testing community asked for my thoughts about a topic I was listening to with a group of people.  So I said what I thought, and the person took what I said and said it was a great example of the point being made.  This made me feel good and that I had something worthwhile to say.  This one encounter at this conference changed my whole perspective of the testing community and spurred me to start writing this testing blog.  So the conference can change you and encourage you in unexpected ways.  (Thanks Michael B – your encouragement has led to all of this from your keen observations.)

At the conference look out for non-conference social meets; they are sometimes advertised on Twitter.  You are not on Twitter?  Please do join; it is a great way to interact with the whole testing community worldwide. I am @steveo1967 on Twitter if you want to find me.  Other meets are word of mouth, so it is important to socialise with people at the conference.

Do not attend every single track at the conference; doing so will exhaust you and place you under a lot of pressure.  Take some time out to reflect on what you have attended, maybe even write it up on a blog or as notes to follow up at work.  Even better, take some time out to visit the expo or the great sights around the venue.  A conference is not about the number of tracks you attend or making sure you fill all your time with lectures.  I tried to do that for the first few conferences I attended and it did not work: I had no time to reflect on what had been said and forgot so much useful information.

I have met many wonderful people at conferences, and some of them have gone on to be close acquaintances with whom I have regular contact, even inviting them to my home.  All it takes is a little courage to try and get involved; this is so difficult for some but very much worth it.

I would say to those who are regulars at conferences: if you spot someone on their own, please try to approach them and introduce yourself. At the Eurostar conference this year there is a new concept called the community hub, which is being hosted by Peter Morgan.  I recommend that you come along and practise doing a little socialising.

ENJOY YOURSELF – it seems strange to say that, but I have seen so many people come to a conference with an unhappy face – they have come because they have been told to, or they feel like an outsider.  Make the most of the conference by taking part and becoming a part of the community.  The testing community can be intimidating for an outsider, but if you take those first wobbly steps towards becoming a part of the community then it welcomes you with open arms and lots of support.

You never know you may meet someone at the conference who changes your life or at least makes you think in a different way.

In memory of Ola Hylten whom I first met at a Eurostar Conference






Thursday, 9 August 2012

Testing RESPECT


Whilst researching a recent blog article on science v manufacturing and testing, I came across an interesting article about scientific standards called the RESPECT code of practice, and I made a mental note to come back to it since I thought it could have some relevance to testing. The article can be found here and a PDF version of the code can be located here.

The purpose of this article is to look at each of the statements about what socio-economic researchers should endeavour to do, and to give my thoughts on how each may apply to testing.

The first paragraph is the one that drew me to the article in the first instance.
Researchers have a responsibility to take account of all relevant evidence and present it without omission, misrepresentation or deception.
It is so interesting how closely this relates to the responsibility of the tester when carrying out testing.  We have a duty to ensure that, ethically and morally, we provide a service that meets these responsibilities. 

The bit that stood out within the main body of text was the following statement:
does not predetermine an outcome
I still find that within the field of testing there are people writing scripted tests in which they try to predict the outcomes before they have actually experienced using the application.  This is not testing; testing is exploring the unknown, asking questions and seeing if there is a problem.

Now if we look at the last line of the paragraph
Data and information must not knowingly be fabricated, or manipulated in a way that might lead to distortion
Hmmm? Anyone want to start a discussion on testing metrics?  Cem Kaner talks about validity of metrics here

Then the article gets into the reporting of findings.
Integrity requires researchers to strive to ensure that research findings …. truthfully, accurately and comprehensively…have a duty to communicate their results in as clear a manner as possible.
I get tired of seeing, time and time again, shoddy or poorly documented testing findings and bug reports.  In my world, exploratory testing is not an excuse for poor reporting of what you did and what you found.

The most exciting part of the article was the final paragraph in which they realise that as human beings we are fallible.
…no researcher can approach a subject entirely without preconceptions 
It is therefore also the responsibility of researchers to balance the need for rigour and validity with a reflexive awareness of the impact of their own personal values on the research
This is something I talk about a lot within this blog: the need to understand that we have our own goals and views which could impact and influence our testing.  We owe it to ourselves to try to be aware of these sometimes irrational and emotional biases. 

The following is my attempt to go through each of the statements made in the article and provide my own personal view (with bias), or some external links where others within the testing community have already discussed the point.

a) ensure factual accuracy and avoid misrepresentation, fabrication, suppression or misinterpretation of data

See previous link to article by Cem Kaner on metrics, Also by Michael Bolton here  and here by Kaner and Bond

b) take account of the work of colleagues, including research that challenges their own results, and acknowledge fully any debts to previous research as a source of knowledge, data, concepts and methodology

In other words, if you use other people's articles, ideas, etc., give them some credit.

c) critically question authorities and assumptions to make sure that the selection and formulation of research questions, and the conceptualisation or design of research undertakings, do not predetermine an outcome, and do not exclude unwanted findings from the outset

STOP accepting that because it has always been done this way, it must be right.

d) ensure the use of appropriate methodologies and the availability of the appropriate skills and qualifications in the research team

An interesting one. I do not take this as meaning you should get certified; other people may.  I take it to mean that we have a responsibility to ensure that everyone we work with has the relevant skills, and if they do not, to mentor and support them in obtaining those skills.  Encourage self-learning, and look at all the available approaches you can use for testing and select the one most suitable for you.

e) demonstrate an awareness of the limitations of the research, including the ways in which the characteristics or values of the researchers may have influenced the research process and outcomes, and report fully on any methodologies used and results obtained (for instance when reporting survey results, mentioning the date, the sample size, the number of non-responses and the probability of error)

In other words, be aware of both your own limits and project limits such as time, money or risk.  Testing is an infinite task, so when reporting make sure it is clear that your sample of ‘tests’ is very small in comparison with all the possible ‘tests’ you could do.

f) declare any conflict of interest that may arise in the research funding or design, or in the scientific evaluation of proposals or peer review of colleagues’ work

Does this apply to testing?  If you are selling a tool or a certification training scheme then this should be stated clearly on any material you publish regarding testing.

g) report their qualifications and competences accurately and truthfully to contractors and other interested parties, declare the limitations of their own knowledge and experience when invited to review, referee or evaluate the work of colleagues, and avoid taking on work they are not qualified to carry out

To me if you stop learning about testing and act like one of the testing dead (see article by Ben Kelly – here) then you are not qualified to carry out testing.

h) ensure methodology and findings are open for discussion and full peer review

Do not hide your testing effort inside a closed system to which only a privileged few have access.  Make sure all your testing effort is visible to everyone within your company (use wikis).

i) ensure that research findings are reported by themselves, the contractor or the funding agency truthfully, accurately, comprehensively and without distortion. In order to avoid misinterpretation of findings and misunderstandings, researchers have a duty to seek the greatest possible clarity of language when imparting research results
  
In other words, make sure that what you report is what you actually did when testing, and that you report it clearly and unambiguously.

j) ensure that research results are disseminated responsibly and in language that is appropriate and accessible to the target groups for whom the research results are relevant

Make sure that all relevant parties have access to your findings: communicate, talk, discuss.  As stated earlier, do not hide your findings; publish them for all to see, warts and all.

k) avoid professional behaviour likely to bring the socio-economic research community into disrepute

We all have a duty as testers to be professional in our behaviour and this means even when we disagree we still need to respect each other’s view and be able to participate in a debate without making others feel inferior.

l) ensure fair and open recruitment and promotion, equality of opportunity and appropriate working conditions for research assistants whom they manage, including interns/stagiaires and research students

Employers and recruitment agencies: STOP using multiple-choice certification schemes as a filter for working in testing.  Holding one of these certificates does not mean that you can test.

m) honour their contractual obligations to funders and employers

This is a given; no comment needed.

n) declare the source of funding in any communications about the research.

If what you are publishing is in your own self-interest or a vested interest in which you can receive funds then please be honest and up front about this.  As professionals we can then make an informed decision about the content.

The context-driven testing school has a list of principles here, and it is interesting to compare the two.  There appears to be some overlap, but maybe we could improve the context-driven list by using more of the RESPECT code of practice.  What do others think?  A good starting point maybe?

Thursday, 26 July 2012

Is testing a manufacturing process or a scientific approach?


Question
Does testing need to move away from manufacturing processes and more towards scientific approaches?


If you take a look at the world of manufacturing you can find many standards (ISO and others) for various everyday things.  However, even within the same subject field there can be many competing standards.  For example, look at the standards for mains electricity around the world.  Which standard do you use? 




What about light fittings in your country?  How many different types are available?




In some cases these standards have been changed to meet local and cultural differences; in others it has been a question of business and not wishing to pay a royalty fee. However, in all these examples context plays a part in the adoption of a standard. Maybe I am being unfair and using examples that are not typical.  From the research I have done I find the same within the QC field: depending on what is being manufactured, the processes and standards can and do change for similar products. 

The xkcd comic strip shows this very clearly.





The problem I have within the domain of software testing is that when we apply manufacturing processes to software testing we are making many assumptions. The biggest one being:


 all software is the same and behaves the same 


which is not true. 


As such, we require different approaches to deal with this, in context with the software being developed, rather than trying to make it fit processes designed around the assumption that everything will be done in exactly the same way.

My concern is with the new ISO software testing standard, ISO/IEC 29119, which appears to be based on manufacturing processes and practices.

The question I have is why? 

Software testing, in my opinion, is not a manufacturing-driven process but more a scientific, experimental approach in which the tester has questions and theories that they wish to prove or disprove. So why try to tie a thinking process down to a checklist-style, document-driven process?  Is it because 'management' can have a false belief that it is more easily managed, or am I being sceptical?

Looking at the field of science and research – yes, I acknowledge there are still some processes, but these seem to be based upon what has been done and experienced rather than on up-front, unnecessary documents.  I see very little about processes; it is more a case of techniques and approaches to use. 




If you then add in other fields such as the social sciences and anthropology, you have a wide range of approaches that lend themselves to testing. I see testers more as scientists, researchers or investigators, looking to debunk their theories, biases and understanding, and trying to learn more about what they are testing.

One approach I stumbled across was the following: http://www.respectproject.org/code/cstds.php?id and I think this could lead to another blog article; there is so much useful information here for testers.



Quoting a couple of them:

  • ensure factual accuracy and avoid misrepresentation, fabrication, suppression or misinterpretation of data
  • critically question authorities and assumptions to make sure that the selection and formulation of research questions, and the conceptualisation or design of research undertakings, do not predetermine an outcome, and do not exclude unwanted findings from the outset 
  • ensure methodology and findings are open for discussion and full peer review

I found this code of practice very similar to the ethics I try to employ when carrying out exploratory testing.  As I have already stated, I may come back to this with another article at a later date.

I remember in school during science lessons we would:


  • Start from a theory
  • Test that theory
  • Write a conclusion based upon what we did
  • Report what we found.

In my mind this is like exploratory testing:

  • We have a charter and a mission
  • We try to prove the mission right or wrong
  • We write about our discoveries and what we learn
  • We report what we find
  • We check to see if what we find matches our original mission


We provide sufficient information for our peers to be able to replicate what we did and see whether they come to the same conclusions or not.  We do this by treading similar ground, but sometimes not exactly the same steps.

The similarities between exploratory testing and scientific research appear to be many.


  • Formulate a theory
  • Test the theory
  • Explore ways to prove the theory is incorrect (Peer testing)
  • Report your findings
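
As a rough sketch of that loop in code (the function under test, the theory and all the names are invented purely for illustration), the emphasis is on hunting for a counterexample rather than confirming a single expected result:

    import random
    import string

    # Invented function under test -- it stands in for any small piece of the product.
    def normalise_name(name):
        return " ".join(name.split()).title()

    # Theory: normalising a name twice gives the same answer as normalising it once.
    def theory_holds(name):
        once = normalise_name(name)
        return normalise_name(once) == once

    # Test the theory by actively trying to prove it incorrect.
    def hunt_for_counterexample(attempts=10_000):
        alphabet = string.ascii_letters + "  '-"
        for _ in range(attempts):
            name = "".join(random.choice(alphabet) for _ in range(random.randint(0, 30)))
            if not theory_holds(name):
                return name   # evidence the theory is wrong -- report it
        return None           # no counterexample found; the theory survives, for now

    if __name__ == "__main__":
        counterexample = hunt_for_counterexample()
        if counterexample:
            print("Theory falsified by:", repr(counterexample))
        else:
            print("No counterexample found in this session")

Either outcome is a finding worth reporting: a counterexample is a bug report in waiting, and a surviving theory is evidence, not proof, that the behaviour holds.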

So, to conclude, I feel that exploratory testing, and testing in general, has more to do with the sciences than with manufacturing.  The sooner we can move testing away from a manufacturing-process-centric way of working to a more natural, scientific way of working, the better the world of testing will be.  We really must stop testing being forced into a process in which people can tick a box and say yes, we have a document for that, and for that, and that, and of course that.   


Testing is a thinking activity, not a document-creating, box-ticking checking process.