Friday, 9 December 2011

Apprenticeship schemes at Test Conferences

A quick blog on a thought I have had.

I read an article today about how we could try to fix the IT skills gap that exists within the UK (this may also apply around the world) by getting young adults into apprenticeships. I have a view that for some people academic study is not right for them and they would be better suited to a vocational training course instead of a university degree. I never went to university and as such I do not have a degree. Do I feel as if I have missed out? I do not think so, but I have not experienced university life so cannot be sure if I missed something I may have liked.

I think within our profession of testing we have an opportunity to mentor and help create the next generation of testers (not discounting coders and architects), allowing them to build up their skills and knowledge by learning from experience rather than studying non-relevant subjects at university (how many universities offer testing as a degree?). As Nassim Nicholas Taleb has said, we as human beings are far better at learning from doing than from books. Over the past year I have been mentoring two people in our craft of testing: one is still ongoing, the other has managed to secure a tester role within a company, and neither had been involved in testing beforehand. I feel we within our community should be trying to do this and encourage young adults, maybe by taking them under our tutelage; it does not require a large amount of personal investment, just a few hours per week. Or maybe within our companies we should all start looking at introducing apprenticeship schemes. Let’s try to tap into this vast resource which, in my opinion, feels it has been abandoned by the educational system.

On the other side I want to call out to those who run conferences, EuroSTAR, CAST, Let's Test, UNICOM, and say let’s advertise for young adults who may have an interest to come along as an apprentice for the length of the conference. They would not pay a fee but would be expected to produce a report on their thoughts and what actions they intend to take away for the future. I have not finalized these thoughts, but it would give these young adults a chance to engage with a craft which I myself feel very passionate about.

Maybe the organizations that run the conferences could look at running an apprenticeship competition or vetting process. I am sure there are many vocational colleges (both in the UK and around the world) who would be willing to get involved in this. It has the added effect that it will start to raise, in the minds of the next generation of influential people, the value of testing, and put testing out there as a forward-thinking craft that people want to get involved with.

What do others think?

I would especially love some feedback from conference organizers to see how feasible these ideas are.

Thursday, 8 December 2011

Recipe Knowledge

This is my response to a blog article written by Paul Gerrard.

http://gerrardconsulting.com/index.php?q=node/599

I was going to post this as a comment but thought it would be better as a separate blog article.

I am not sure if I entirely agree with what Paul was saying, but that is the point of a good blog article. I will say I entirely agree with his conclusion that we have to have our eyes open and our brains switched on. There are methods that can be used to prevent the ‘quitting’ process and the rambling-around-in-the-dark approach to exploratory testing, but I think that would be an entirely different article.

However, I would suggest people try searching for articles on avoiding (or being aware of) bias, cognitive research methods, and focusing and defocusing skills. Another one to look at is Air Traffic Control work patterns; they work in time-boxed shifts – is this similar to session-based testing? The point I want to make is that the issue Paul raises about domain knowledge and the usefulness that scripts may bring is an important one.

I am not in the camp that says we should abandon scripts, and a lot of the people I communicate with are not saying that either. I feel there are a lot of Chinese whispers with regards to the views of some people on the use of scripted tests. I cannot recall anyone saying to me that we must abandon scripts in favour of just doing exploratory testing (is that a bias, and am I deliberately missing or not noticing information?). We can also train ourselves not to quit by using a variety of cognitive processes, especially the use of checklists and heuristics. These ‘tools’ enable us to counter the quitting instinct by triggering new paths, observations and comparisons.

Testing is not just about finding things; it is about asking questions and forming theories based on the answers (evidence) given while experiencing the software. This may lead to more questions, further evaluation, and even re-evaluation of what you already thought, debunking and disproving your theories. Finding bugs is a side effect of this approach, a very useful side effect, but it is not the sole purpose of testing.

There is a term used within society (especially the social science community) which is ‘recipe’ knowledge; this is often devalued by academics since it is a step-by-step instruction for learning something. In the everyday world context, recipes tell you what to use, what you will need (ingredients) and exactly what procedures to follow – this sounds familiar to scripts in the testing world. These recipes can provide important foundations for acquiring or developing skills, or as we would say in the software development world, learning domain knowledge. People using a recipe, as we know, may not follow it exactly; they may taste the product and adjust it for their own personal taste, so they will move away from the script. However, we should not pretend that learning a recipe is the same as learning a skill.

If we for example look at baking, this requires a ‘knack’ which can only come from experience (if you have tried baking bread you will understand this). Like qualitative analysis, baking also permits creativity and the development of your own styles.

The skilled tester at some point, like the experienced chef, may stop using the recipe book and start to experiment and explore different tastes and ways to discover more and hopefully improve their skills. At the same time the recipe (script) remains a useful tutorial for the newcomer to the art.

Some of the content used here is taken from the following book:

Qualitative Data Analysis: A User-friendly Guide for Social Scientists - Ian Dey

Wednesday, 7 December 2011

A ‘title’ is of value to someone who matters

Recently I attended the EuroSTAR Testing Conference in Manchester and came away with some mixed messages and thoughts about the content of the conference. Some of the presentations and tracks were really good, whilst others appeared to repeat the same old information. I hope to write a few blog articles on some of the positive messages I got from the conference, along with lots of ideas I have with regards to social science and how it can be used within testing; these may have to wait until after the holiday period.

The reason for writing this blog post is what appeared to be a negative message coming from some of the keynote presentations; this is my opinion and how I understood the messages in the context of my views on testing and testers. The one point I wish to raise (and maybe rant about) is one of the messages that James Whittaker made:

“at Google ‘Tester’ has disappeared from people’s job titles. People who were ‘testers’ are now ‘developers’ and are expected to code regularly”

Now my thoughts on this may be taking the point James was making out of context; however, I am not sure in what other context it could be made.

James, during the presentation, made the point that testers should be part of the team and not get bogged down in who has what role, and I wholeheartedly agree with that.

However, from a social and status perspective, people need to be able to identify with a title, and there has been a lot of talk within the development community about removing titles, especially the title of tester. Take the following scenario:

You go out on a social evening with a group of friends and their partners, together with those you work with: a project manager, a developer, a business analyst and a tester. As the evening proceeds, each person is asked by a non-team member what they do at work.

The developer could reply: I write code and create applications

The tester could reply that they test to ensure the system works

The project manager could reply that they make sure everyone knows what target they have to meet

The Business analyst could say they provide information on what the customers who will use the application need

I would say each person answering this question would be proud of their job title and what they do.

So my take is that making a statement in which we get rid of the title of tester and call everyone a developer is a little insulting, and makes me personally feel unappreciated and undervalued. I feel I have been working as a tester for a long period of time now, and whilst I can understand that within a team people can have a variety of roles and responsibilities, why should I have to give up something that I feel passionate about? I wonder what would be said if at a developers’ conference it was announced that everyone is now going to be called a business analyst, since we all provide something that the customer wants.

Why does everyone have to be a developer within a project? My concern is: why has the word ‘tester’ become such a dirty word? It is as if we should be ashamed of what we are and what our title is.

I AM A TESTER AND PROUD OF IT!

Friday, 4 November 2011

Defining Testing

I am about to run a couple of internal workshops on the exploratory testing approach, which is based upon a lot of work done by Michael Bolton and James Bach. One of the concerns I have been having recently is what people within the organisation think testing is, in comparison to what they are actually doing. So I started to put together an article looking at these concerns and trying to see if there is a problem. This blog is based upon some of the points that I cover in the article.
WARNING
Disclaimer:

The views and definitions expressed in this article are my own and as such they may not match what a dictionary may say or agree with your views/definitions.

When I start to look at what we see as testing activities they appear to fall into three distinct categories:
  • Validation
  • Verification
  • Testing

These terms may be familiar to some of the older readers of this blog. V, V & T has been around for a long time and has its origins within the manufacturing industry. It has been the main process used to provide quality control and assurance of manufacturing production lines. (http://en.wikipedia.org/wiki/Verification_and_validation)

It appears that these ‘manufacturing’ processes have been applied to software testing (http://en.wikipedia.org/wiki/Verification_and_Validation_(software))

This seems to have led to the appearance of process standards, initially the ISO 9000 Quality Assurance standard, which was modified to become the ISO 9001:2008 standard, which included software. These standards are very closely linked to manufacturing processes and, from a software testing perspective, quality control methods.

Talking to and observing various companies, I have seen that a lot of people’s perception of testing is as shown in the photo below.

http://brigitteofseon.wordpress.com/category/work-hard-no-go-slow/

Is it a problem to have this perception of software testing?

At the beginning of my career in software testing, a lot of companies started to change from being mainly hardware manufacturers to both hardware and software manufacturers. There was a need among these companies for processes they could use to prove the ‘quality’ of their software products, and the general consensus was that what had worked in quality control for hardware could surely be applied to software testing.

The reasoning behind this was based upon some fairly flawed assumptions:

  • All software was the same
  • All software worked in the same way
  • All users would follow the designed work flows.
  • All users would behave in the same way

The main focus of these processes was to validate and verify what was already known about the product and its expected inputs and outputs. In my opinion, following quality control and assurance processes is not really testing. Testers ‘normally’ do not control the quality (yes, there are approaches such as TDD which ‘may’ help). If there is crap in the system you are testing, then there will still be crap in the system afterwards. Testers provide a service telling you there is crap in the system. Michael Bolton talks more about getting out of the QA business here

Validation and verification will, in the majority of cases, NOT:

  • Tell us anything new about the product
  • Make us ask questions of the product

So what do I mean when I talk about validation and verification?

Validation:

To me, validation is about proving what you already know about the product: confirming that what the requirements say is correct and that the system behaves in accordance with what you believe it should. The normal response when validating will be:

  • true or false
  • yes or no
  • 0 or 1

I see validation as a checking exercise (see the article by Michael Bolton here on testing vs checking) rather than a testing exercise. It still has some value within the testing approach, but it will not tell you anything new about the system being tested. It will prove that what you already know about the system is correct and working (or not working). This is like testing requirements or validation of fields in a database/GUI – you know what the input is and you know what outputs you expect according to the specification/requirements, so why not automate this?

The majority of ‘testing’ I see happening is validation, and even though it has some value I would not count validation as testing, since it does not tell you anything new about the system you are testing.

It should be noted that interpreting the results from validation ‘testing’/checking requires human interaction to work out whether what happened matches the correct expected response.
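To make the “why not automate this?” point concrete, here is a minimal sketch in Python of automating a field-validation check. The field and its rules are hypothetical, invented purely for illustration – the point is that every input already has a known yes/no answer, which is what makes this a checking exercise a machine can run for us.

```python
# A minimal sketch of automating a validation (checking) exercise.
# The field rule below is a hypothetical example, not taken from any
# real specification: we already know the expected answer for each
# input, so a machine can confirm it.

def validate_age_field(value):
    """Return True if the input satisfies the (assumed) spec:
    an integer between 0 and 130 inclusive."""
    return isinstance(value, int) and 0 <= value <= 130

# Each case pairs a known input with its known expected outcome.
# The answer is always a simple yes/no - which is why this is
# checking rather than testing.
cases = [
    (25, True),     # typical valid value
    (0, True),      # lower boundary
    (130, True),    # upper boundary
    (-1, False),    # just below the boundary
    (131, False),   # just above the boundary
    ("25", False),  # wrong type
]

for value, expected in cases:
    actual = validate_age_field(value)
    assert actual == expected, f"{value!r}: expected {expected}, got {actual}"

print("all validation checks passed")
```

Nothing in this run tells us anything new about the system; it only re-confirms what the (assumed) specification already states.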

Verification:

When I look at the term verification, I use it for when we are verifying any bugs that have been previously found. Someone has made a change to the product and I want to verify that the change has fixed the problem I saw before. Some verification tests can be automated – for example, if you have run a test previously and found a problem, you may be able to automate the steps you followed so that you do not need to run the same test manually again.
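As a sketch of the kind of verification check that can be automated: suppose (purely hypothetically – the function and the bug below are invented for illustration) a basket-total calculation once failed on an empty basket and has since been fixed. The steps that exposed the bug can be captured once in code and re-run on every change:

```python
# A minimal sketch of an automated verification check for a previously
# found bug. The function and the bug are hypothetical: imagine a
# basket-total calculation that once raised an error on an empty
# basket, which a developer has now fixed.

def basket_total(prices, discount=0.0):
    """Total a basket of prices with an optional fractional discount.
    The fixed version guards against the empty-basket case that
    (in this invented scenario) originally caused a failure."""
    if not prices:   # the previously reported bug: empty basket
        return 0.0   # fixed behaviour - the total is simply zero
    return sum(prices) * (1.0 - discount)

def verify_empty_basket_fix():
    # Re-create the exact steps that exposed the bug, captured once
    # so the verification can run automatically on every change.
    assert basket_total([]) == 0.0                    # the original failure case
    assert basket_total([10.0, 5.0]) == 15.0          # normal behaviour still intact
    assert basket_total([10.0], discount=0.5) == 5.0  # discount path still intact

verify_empty_basket_fix()
print("bug fix verified")
```

Once captured like this, the verification costs nothing to repeat, freeing the tester to spend their time on actual testing.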

Testing:

I see testing as a thinking exercise in which you need a person to use their skills (and brain) to ask questions of the system being tested. From asking these questions they learn more about the system and its behaviour. They will not know the answer to a question, but by investigating and tinkering with the product they can form some reasonable answer to the question they posed. When testing we act like crime investigators – you suspect foul play, but you need to ask questions and gather evidence to back up your theories and provide answers to your questions. Testing is not based upon requirements or specifications, but rather upon what the specifications and requirements are not saying.

Testing is about asking:

  • What
  • Why
  • How

Nassim Nicholas Taleb came up with the following interesting quote:

We are better at doing than learning. Our capacity for knowledge is vastly inferior to our capacity for doing things – our ability to tinker, play, discover by accident. http://www.fooledbyrandomness.com/notebook.htm

So after all of this is there really a problem?

Some of the problems I see within the software testing industry are:

  • We spend far too much time validating rather than testing
  • We repeat the same validations (manually) time and time again
  • We cover less of the system by only repeating the same validations
  • Testing becomes a checking exercise rather than a testing exercise
  • Testers are not engaged
  • Testers are not challenged
  • Testers do not need to think
  • People see testing as a boring thing to do
  • Testers used only to validate manually are seen as robots

What can be done to improve this?

  • Look to automate the validation (the checking stuff)
  • Improve coverage by changing the data sets used in validation
  • Start to use an exploratory testing approach – attend a Rapid Software Testing course
  • Look at using session-based testing
  • THINK – engage your mind and question the system.

We need to keep learning about testing and do more testing, rather than repeatedly validating systems; it then becomes a much better and more challenging role to be a tester.

Monday, 24 October 2011

If Testers were Paranormal Investigators

Image: Witthaya Phonsawat / FreeDigitalPhotos.net
http://www.freedigitalphotos.net/images/view_photog.php?photogid=3116


I thought, considering it is getting close to the spooky time of year (All Hallows’ Eve), I would put together a tongue-in-cheek article about what would happen if exploratory testers were paranormal investigators.

It was a dark moonlit night as the certified tester and the exploratory tester approached the dark imposing building that their project manager had asked them to look at. The project manager wanted them to report back on whether the building was suitable for him to move into and that there were no hidden surprises. They had heard stories that the building was full of bugs and other scary stuff.
The testers used the (pass) keys to enter the building and slowly walked into the main hallway. As they started to walk around, the room suddenly got cold as the temperature dropped.

The certified tester says it must be death bringing down the temperature.

The exploratory tester looks around the room and notices that there appears to be a draft coming from just outside the room. They go to explore the draft, since it now interests them as something that could answer a question. Once outside the room they notice that one of the windows has come open due to a broken catch. They close the window and make a note to contact a handyman in the morning. The temperature of the room returns to normal.
The testers slowly move towards the kitchen when all of a sudden an overwhelming, disgusting smell overpowers their senses.

The certified tester is certain that this is the smell of death coming to get them.

The exploratory tester is not sure and starts to use their sense of smell to see if they can locate where the smell is coming from. They notice that the smell appears to get stronger in the direction of the fridge. They open the fridge door and note that the fridge does not appear to be working (even checking that it is plugged in); inside the fridge there is a bottle of gone-off milk, which appears to be the source of the smell. They make another note to contact a fridge repair person in the morning.

The testers now move to the next floor in the building when suddenly they appear to see something move on the stairwell.

The certified tester is sure that this is a sign of spirits from beyond the grave.

The exploratory tester takes a moment to think about possible reasons for the movement before realising that the window has come open again causing the light fitting to swing and cast different light patterns on the stairwell giving the impression of movement.

They now move towards the main bedroom on the first floor and suddenly they hear unnatural sounds and what appears to be a creature from another world.
The certified tester is certain that these are souls from the other side warning them to leave, and now starts to panic.

The exploratory tester is not so sure, and even though they are starting to get a little scared they open the door to the main bedroom, and the noise gets louder and louder. With their heart pounding they enter the room and see a large irregular-shaped mass on the bed from where the noises are coming. Slowly they move forward, getting closer, closer still and even closer……


Suddenly the mass moves and the exploratory tester notices that it is their project manager, fast asleep and snoring – the snoring causing the supernatural noises.

THE END.

Loosely based upon the following article: http://www.wired.com/magazine/2011/09/pl_screenghosthunters

Disclaimers:
• All characters appearing in this work are fictitious. Any resemblance to real persons, living or dead is purely coincidental.
• I wish to stress that this blog in no way endorses a belief or non-belief in the occult or anything of a supernatural nature.

Thursday, 20 October 2011

Sat Navs and Maps

The following blog article is based upon a lightning talk I gave at the Software Testing Club meet-up in Winchester on the 19th Oct 2011.

I have recently been on holiday touring the Yorkshire Dales and Moors, covering over 1000 miles in one week. The car is fitted with a sat nav, which is great when we want to get from A to B, but I also have in the car a large-scale map of the UK. I started to think about how we use these two ‘tools’ and how this could be used within testing to show the difference between following a set of instructions (scripts) and exploring the countryside (ET).

An example is that both have the same goal (mission): I want to get from A to B. However, we use sat navs to show us the most direct, quickest, fastest way (some sat navs do now have an option for a scenic route).

So I set off into the Yorkshire Moors using the large-scale map (my wife being the navigator). We knew where we wanted to end up, but the route we took was through the back of beyond. (In fact, one of the roads we ended up on did not even appear on the sat nav map, which said we must return to a digitized area – a bug?) We explored the areas, and when we noticed things that appeared interesting we took a detour and explored them. It was a wonderful experience, and we found places of interest that were outstanding in natural beauty, along with all the seasons in one day (sun, rain, hail). At the end of the journey we had discovered some great things but still ended up at the place we wanted to be; yes, it took (slightly) more time, but we found out more.

My point is that if you stick to using the sat nav you end up at the same place, but you may miss so much that is interesting. Now can we compare this to testing? Yes, a script ‘may’ be useful for getting you from A to B, but how much will you discover, how many surprises will you find? Yes, I could repeat the same journey again, since we have the map and know the route I took. Would I want to repeat the exact same route? I am sure that if I went to that area again I would be tempted to go a slightly different way, since there could be things around the corner that may interest me.

Rob Lambert pointed out the following to me:

“I find the sat nav is a safety and re assurance aid also in that i can explore but then turn the sat nav on, or refer to it, to then return to a known route.”

I would question this in the sense that it could lead to a false sense of security. What happens if the map gets corrupted, or the electronics fail? I would tend to think of the paper map as the safety and reassurance, rather than the sat nav, which may have a tendency to fail.

With regards to the meet-up in Winchester – I wish more people had come along; they missed a great evening of testing discussions, with Michael Bolton on top form. There are plans to have a regular bi-monthly meet-up in Winchester in the near future – watch out for an announcement via the Software Testing Club soon.

Sunday, 18 September 2011

Risky Business

Within the testing profession we are all aware of risk, and in the majority of cases we adjust our testing based upon risk. Is this the wrong approach to take? What models do you use to assign risk to elements within a project?

In my experience, in most situations the risks we apply are based upon things we know could go wrong or disrupt the testing we are going to carry out. Most risk assessment is done beforehand and upfront. It is normally based upon the probability of what could occur to a system, based upon someone’s experiences, viewpoint and biases at a given time. I am not sure this is the correct approach to take within testing.

Testing is not an exact science; there are some elements where we can predict the outcomes and risks, yet there are far more where it is unpredictable. The thoughts behind this blog post are to look at this unpredictability and how we can try to include it in our testing approach.

Nassim Nicholas Taleb (1), within his book The Black Swan (2), talks about the highly improbable and its impact on the stock market. He states that the majority of investments are based upon risk and use models in which known risks are taken into account. What these models do not include are the improbable risks: things such as natural disasters (3), or individuals/countries (4) doing something that cannot be predicted.

In conclusion, Taleb says that most models are based upon top-down predictions using experiences of what has already happened, which is a high-risk strategy, rather than planning for the unpredictable – the things that cannot be planned for.

So how can this apply to testing?

How many times within testing have we seen a last-minute showstopper, just before go-live? Or a showstopper discovered in the live system when some apparently totally random set of circumstances occurs (a multiple failure of various unconnected components – the recent power failure within the USA (5))? Could this have been predicted as a risk? Would people have built it into their models? In my opinion, I doubt it.

Do we need to change the way we use risk within our testing? Taleb talks about using stochastic tinkering (6), which to me is fascinating since it appears to match closely the exploratory testing approach. As an example, look at the following two statements:

Thus stochastic tinkering requires experimenting in small ways, noticing the new or unexpected, and using that to continue to experiment.

The general principle is: Do as little as possible unless the system shows you have to do more, then do only as much as you need to keep the process going.

If we change the wording of these statements so that they apply to testing:

Thus stochastic tinkering requires TESTING in small ways, noticing the new or unexpected, and using that to continue to TEST.

The general principle is: Do as little as possible unless the system shows you have to do more TESTING, then do only as much as you need to keep the TESTING going.

Does the exploratory testing approach (by design or accident) do this already? To me it appears as if, by using exploratory testing instead of detailed, well-planned, risk-assessed test scripts, we are more likely to discover the ‘black swans’.

Food for thought…

References: