Friday, 4 November 2011

Defining Testing

I am about to run a couple of internal workshops on the Exploratory Testing approach, which is based upon a lot of work done by Michael Bolton and James Bach. One of the concerns I have had recently is what people within the organisation think testing is, compared to what they are actually doing. So I started to put together an article looking at these concerns and trying to see if there is a problem. This blog is based upon some of the points that I cover in the article.
Disclaimer:

The views and definitions expressed in this article are my own, and as such they may not match what a dictionary says or agree with your own views/definitions.

When I start to look at what we see as testing activities, they appear to fall into three distinct categories:
  • Validation
  • Verification
  • Testing

These terms may be familiar to some of the older readers of this blog. V V & T has been around for a long time and has its origins within the manufacturing industry. It has been the main process for providing quality control and assurance on manufacturing production lines. (http://en.wikipedia.org/wiki/Verification_and_validation)

It appears that these ‘manufacturing’ processes have been applied to software testing (http://en.wikipedia.org/wiki/Verification_and_Validation_(software))

This seems to have led to the appearance of process standards, initially the ISO 9000 quality assurance standard, which was revised to become the ISO 9001:2008 standard, which includes software. These standards are very closely linked to manufacturing processes and, from a software testing perspective, to quality control methods.

Talking to and observing various companies, I have seen that a lot of people's perception of testing is as shown in the photo below.

http://brigitteofseon.wordpress.com/category/work-hard-no-go-slow/

Is it a problem to have this perception of software testing?

At the beginning of my career in software testing, a lot of companies started to change from being mainly hardware manufacturers to both hardware and software manufacturers. There was a need among these companies for processes they could use to prove the 'quality' of their software products, and the general consensus was that what had worked in quality control for hardware could surely be applied to software testing.

The reasoning behind this was based upon some fairly flawed assumptions:

  • All software was the same
  • All software worked in the same way
  • All users would follow the designed work flows
  • All users would behave in the same way

The main focus of these processes was to validate and verify what was already known about the product and its expected inputs and outputs. In my opinion, following quality control and assurance processes is not really testing. Testers 'normally' do not control the quality (yes, there are approaches such as TDD which 'may' help). If there is crap in the system you are testing, then there will still be crap in the system afterwards. Testers provide a service: telling you there is crap in the system. Michael Bolton talks more about getting out of the QA business here.

Validation and verification will, in the majority of cases, NOT:

  • Tell us anything new about the product
  • Make us ask questions of the product

So what do I mean when I talk about validation and verification?

Validation:

To me, validation is about proving what you already know about the product: confirming that what the requirements say is correct, and that the system behaves in accordance with what you believe it should. The normal response when validating will be:

  • true or false
  • yes or no
  • 0 or 1

I see validation as a checking exercise (see the article by Michael Bolton here on testing vs checking) rather than a testing exercise. It still has some value within the testing approach, but it will not tell you anything new about the system being tested. It will prove that what you already know about the system is correct and working (or not working). This is like testing requirements or validating fields in a database/GUI: you know what the input is, and you know what outputs you expect according to the specification/requirements, so why not automate this?

The majority of 'testing' I see happening is validation, and even though it has some value I would not count validation as testing, since it does not tell you anything new about the system you are testing.

It should be noted that interpreting the results from validation 'testing'/checking requires human interaction, to work out whether what happens is the correct expected response.
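As a minimal sketch of what "why not automate this?" could look like, here is a table-driven set of checks. Everything here is hypothetical: `add()` stands in for the system under test, and the check table stands in for the specified input/expected-output pairs.

```python
# A minimal sketch of automating validation checks: each row pairs a
# known input with the output the specification says to expect.
# add() is a hypothetical stand-in for the system under test.

def add(a, b):
    return a + b

# (input, expected output) pairs taken straight from the 'requirements'.
CHECKS = [
    ((1, 2), 3),
    ((0, 0), 0),
    ((-1, 1), 0),
]

def run_checks():
    """Run every check; an empty list means what we already knew still holds."""
    failures = []
    for (a, b), expected in CHECKS:
        actual = add(a, b)
        if actual != expected:
            failures.append((a, b, expected, actual))
    return failures
```

Note that a human still had to decide the expected values up front, and a human still has to interpret any failures; the machine only confirms (or denies) what was already known.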

Verification:

When I look at the term verification, I use it for when we are verifying any bugs that have been previously found. Someone has made a change to the product, and I want to verify that the change has fixed the problem I had seen before. Some verification tests can be automated: for example, if you have run a test previously and found the problem, you may be able to automate the steps you followed so that you do not need to run the same test manually again.
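A sketch of that idea: the steps that originally exposed a bug, captured as an automated check so the fix can be verified on every build. The function, the bug number and the inputs are all invented for illustration.

```python
# Hypothetical sketch: the manual steps that originally exposed a bug,
# captured as an automated check so verifying the fix needs no manual re-run.

def parse_price(text):
    # Fixed behaviour: an earlier (imaginary) build crashed on a
    # leading currency symbol.
    return float(text.lstrip("£$"))

def verify_bug_1234_fixed():
    """Replay the exact input that triggered the original defect."""
    assert parse_price("£19.99") == 19.99
    # And confirm the ordinary case still works after the change.
    assert parse_price("5.00") == 5.0
```

Once captured like this, the verification runs for free every time the product changes, rather than consuming a tester's time.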

Testing:

I see testing as a thinking exercise in which you need a person to use their skills (and brain) to ask questions of the system being tested. From asking these questions they learn more about the system and its behaviour. They will not know the answer in advance, but by investigating and tinkering with the product they can form a reasonable answer to the question they posed. When testing we act like crime investigators: you suspect foul play, but you need to ask questions and gather evidence to back up your theories and provide answers to your questions. Testing is not based upon the requirements or specification, but rather upon what the specification and requirements are not saying.

Testing is about asking:

  • What
  • Why
  • How

Nassim Nicholas Taleb came up with the following interesting quote:

We are better at doing than learning. Our capacity for knowledge is vastly inferior to our capacity for doing things – our ability to tinker, play, discover by accident. http://www.fooledbyrandomness.com/notebook.htm

So after all of this is there really a problem?

Some of the problems I see within the software testing industry are:

  • We spend far too much time validating rather than testing
  • We repeat the same validations (manually) time and time again
  • We cover less of the system by only repeating the same validations
  • Testing becomes a checking exercise rather than a testing exercise
  • Testers are not engaged
  • Testers are not being challenged
  • Testers do not need to think
  • People see testing as a boring thing to do
  • Testers used mainly for manual validation are seen as robots

What can be done to improve this?

  • Look to automate the validation (the checking stuff)
  • Improve coverage by changing the data sets used in validation
  • Start to use an exploratory testing approach – attend a Rapid Software Testing course
  • Look at using session-based testing
  • THINK – engage your mind and question the system
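The second point, improving coverage by varying the data sets, can be sketched as a seeded generator: each run can exercise different values, while any failing data set is still reproducible from its seed. The names here (`validate_username`, the alphabet) are hypothetical illustrations.

```python
import random

# Sketch: generate a fresh data set per run instead of replaying the same
# fixed values; the seed makes any failing run reproducible.
# validate_username() is a hypothetical check under test.

def validate_username(name):
    return 3 <= len(name) <= 12 and name.isalnum()

def sample_usernames(seed, count=5):
    """Deterministically generate `count` candidate inputs from a seed."""
    rng = random.Random(seed)
    alphabet = "abcdef0123 _-"
    return ["".join(rng.choice(alphabet) for _ in range(rng.randint(0, 15)))
            for _ in range(count)]

def run_with_seed(seed):
    # Changing the seed changes the data set, widening coverage over time.
    return [(name, validate_username(name)) for name in sample_usernames(seed)]
```

The same checks run each time, but against different data; over many runs this covers far more of the input space than one hand-picked set repeated forever.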

We need to keep learning about testing and do more testing, rather than repeatedly validating systems; being a tester then becomes a much better and more challenging role.

Monday, 24 October 2011

If Testers were Paranormal Investigators

Image: Witthaya Phonsawat / FreeDigitalPhotos.net
http://www.freedigitalphotos.net/images/view_photog.php?photogid=3116


I thought, considering it is getting close to the spooky time of year (All Hallows' Eve), I would put together a tongue-in-cheek article about what would happen if exploratory testers were paranormal investigators.

It was a dark moonlit night as the certified and exploratory testers approached the imposing building that their project manager had asked them to look at. The project manager wanted them to report back on whether the building was suitable for him to move into and whether there were any hidden surprises. They had heard stories that the building was full of bugs and other scary stuff.
The testers used the (pass) keys to enter the building and slowly walked into the main hallway. As they started to walk around, the room suddenly got cold as the temperature dropped.

The certified tester says it must be death bringing down the temperature.

The exploratory tester looks around the room and notices that there appears to be a draught coming from just outside the room. They go to explore the draught, since it now interests them as something that could answer a question. Once outside the room, they notice that one of the windows has come open due to a broken catch. They close the window and make a note to contact a handyman in the morning. The temperature of the room returns to normal.
The testers slowly move towards the kitchen, when all of a sudden an overwhelming, disgusting smell overpowers their senses.

The certified tester is certain that this is the smell of death coming to get them.

The exploratory tester is not sure and starts to use their sense of smell to see if they can locate where the smell is coming from. They notice that the smell appears to get stronger in the direction of the fridge. They open the fridge door and note that the fridge does not appear to be working (even noticing that it is plugged in); inside the fridge there is a bottle of gone-off milk, which appears to be the source of the smell. They make another note, to contact a fridge repair person in the morning.

The testers now move to the next floor in the building when suddenly they appear to see something move on the stairwell.

The certified tester is sure that this is a sign of spirits from beyond the grave.

The exploratory tester takes a moment to think about possible reasons for the movement, before realising that the window has come open again, causing the light fitting to swing and cast different light patterns on the stairwell, giving the impression of movement.

They now move towards the main bedroom on the first floor, and suddenly they hear unnatural sounds and what appears to be a creature from another world.
The certified tester is certain that these are souls from the other side warning them to leave, and now starts to panic.

The exploratory tester is not so sure, and even though they are starting to get a little scared they open the door to the main bedroom, and the noise gets louder and louder. With their heart pounding they enter the room and see a large, irregularly shaped mass on the bed, from where the noises are coming. Slowly they move forward, getting closer, closer still, and even closer……..


Suddenly the mass moves, and the exploratory tester notices that it is their project manager, fast asleep and snoring; it is the snoring that is causing the supernatural noises.

THE END.

Loosely based upon the following article: http://www.wired.com/magazine/2011/09/pl_screenghosthunters

Disclaimers:
• All characters appearing in this work are fictitious. Any resemblance to real persons, living or dead is purely coincidental.
• I wish to stress that this blog in no way endorses a belief or non-belief in the occult or anything of a supernatural nature.

Thursday, 20 October 2011

Sat Navs and Maps

The following blog article is based upon a lightning talk I gave at the Software Testing Club meet-up in Winchester on the 19th of October 2011.

I have recently been on holiday touring the Yorkshire Dales and Moors, covering over 1000 miles in one week. The car is fitted with a sat nav, which is great when we want to get from A to B, but I also have in the car a large-scale map of the UK. I started to think about how we use these two 'tools' and how this could be used within testing to show the difference between following a set of instructions (scripts) and exploring the countryside (ET).

An example: both have the same goal (mission), I want to get from A to B. However, we use sat navs to show us the most direct, quickest, fastest way (some sat navs do now have an option for a scenic route).

So I set off into the Yorkshire Moors using the large-scale map (my wife being the navigator). We knew where we wanted to end up, but the route we took was through the back of beyond. (In fact, one of the roads we ended up on did not even appear on the sat nav map, which said we must return to a digitized area – bug?) We explored the areas, and when we noticed things that appeared interesting we took a detour and explored them. It was a wonderful experience, and we found places of outstanding natural beauty, along with all the seasons in one day (sun, rain, hail). At the end of the journey we had discovered some great things but still ended up at the place we wanted to be; yes, it took (slightly) more time, but we found out more.

My point is that if you stick to using the sat nav you end up at the same place, but you may miss so much that is interesting. Now, can we compare this to testing? Yes, a script 'may' be useful for getting you from A to B, but how much will you discover, how many surprises will you find? Yes, I could repeat the same journey again, since we have the map and I know the route I took. Would I want to repeat the exact same route? I am sure that if I went to that area again I would be tempted to go a slightly different way, since there could be things around the corner that may interest me.

Rob Lambert pointed out the following to me:

“I find the sat nav is a safety and re assurance aid also in that i can explore but then turn the sat nav on, or refer to it, to then return to a known route.”

I would question this, in the sense that it could lead to a false sense of security. What happens if the map gets corrupted, or the electronics fail? I would tend to think of the paper map as the safety and reassurance, rather than the sat nav, which may have a tendency to fail.

With regards to the meet-up in Winchester: I wish more people had come along; they missed a great evening of testing discussions, with Michael Bolton on top form. There are plans to have a regular bi-monthly meet-up in Winchester in the near future – watch out for an announcement via the Software Testing Club soon.

Sunday, 18 September 2011

Risky Business

Within the testing profession we are all aware of risk and in the majority of cases we adjust our testing based upon risk. Is this the wrong approach to take? What models do you use to assign risk to elements within the project?

In my experience, in most situations the risks we apply are based upon things we know could go wrong or could disrupt the testing we are going to carry out. Most risk assessment is done beforehand and upfront. It is normally based upon the probability of what could occur to a system, drawn from someone's experiences, viewpoint and biases at a given time. I am not sure this is the correct approach to take within testing.

Testing is not an exact science: there are some elements where we can predict the outcomes and risks, yet there are far more where things are unpredictable. The thoughts behind this blog post are to look at this unpredictability and how we can try to include it in our testing approach.
Nassim Nicholas Taleb (1), within his book The Black Swan (2), talks about the highly improbable and its impact on the stock market. He states that the majority of investments are based upon risk and use models in which known risks are taken into account. What these models do not include are the improbable risks: things such as natural disasters (3), or individuals/countries (4) doing something that cannot be predicted.

In conclusion, Taleb says that most models are based upon top-down predictions using experiences of what has already happened, which is a high-risk strategy, rather than planning against the unpredictable: the things that cannot be planned for.

So how can this apply to testing?

How many times within testing have we seen a last-minute showstopper, just before go-live? Or a showstopper discovered in the live system when what appears to be a totally random set of circumstances happens (a multiple failure of various unconnected components, such as the recent power failure within the USA (5))? Could this have been predicted as a risk? Would people have built this into their models? IMO, I doubt it.

Do we need to change the way we use risk within our testing? Taleb talks about using stochastic tinkering (6), which to me is fascinating since it appears to match closely the exploratory testing approach. As an example, look at the following two statements:

Thus stochastic tinkering requires experimenting in small ways, noticing the new or unexpected, and using that to continue to experiment.

The general principle is: Do as little as possible unless the system shows you have to do more, then do only as much as you need to keep the process going.

If we change the wording of these statements so that they apply to testing:

Thus stochastic tinkering requires TESTING in small ways, noticing the new or unexpected, and using that to continue to TEST.

The general principle is: Do as little as possible unless the system shows you have to do more TESTING, then do only as much as you need to keep the TESTING going.
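The reworded principle can even be sketched as a small testing loop: probe lightly everywhere, and only spend more effort where the system surprises you. Everything in this sketch (`observe`, the area names, the probe budget) is a hypothetical illustration, not a real tool.

```python
# Toy sketch of 'stochastic tinkering' as a testing loop: small probes,
# with extra probes queued only where something unexpected shows up.
# observe() is a hypothetical stand-in for running one small experiment.

def tinker(observe, areas, budget=10):
    findings = []
    queue = list(areas)
    while queue and budget > 0:
        area = queue.pop(0)
        budget -= 1
        surprise = observe(area)
        if surprise is not None:
            findings.append((area, surprise))
            # The system showed us something: do more testing right here.
            queue.insert(0, area + "/deeper")
    return findings
```

If `observe` flags nothing, the loop does as little as possible; the moment an area produces a surprise, a follow-up probe on that area jumps to the front of the queue, keeping the process going only where the system demands it.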

Does the exploratory testing approach (by design or accident) do this already? To me it appears that by using exploratory testing, instead of detailed, well-planned, risk-assessed test scripts, we are more likely to discover the 'black swans'.

Food for thought…

References:

Friday, 5 August 2011

Professional Qualifications and Bodies

I saw an interesting tweet from James Bach (@jamesmarcusbach) the other day:

@testingclub What counts as certification? What's a "professional qualification?" Why is schooling confused with education?

This was in reply to the following post from the Software Testing Club (@testingclub) about a survey of testers:

@jamesmarcusbach you may be interested in the Education for Testers survey results http://www.thetestingplanet.com/2011/07/infographic-education-for-testers/


Whilst the data within the survey may be of interest to some people, what really got me thinking were the questions James was asking, and within this blog article I am going to attempt to answer some of them from my perspective. It does not necessarily mean that my view is correct, and I encourage people to debate and correct points that I make; however, it is important to remember the context of this: it is my own personal view of the testing world.

One of the key points that James states to the testing community is that testing is context driven. I feel the answers to these questions are also dependent on context, and as such the answers to the questions are context driven.

The first question I intend to try and answer is “What’s a professional qualification?”

The context I am using to answer this is within the UK and Europe where they appear to be very well defined.

Professional qualifications in the UK are generally awarded by professional bodies in line with their charters. These qualifications are subject to the European directives on professional qualifications. Most, but not all, professional qualifications are 'Chartered' qualifications, and follow on from having done a degree (or equivalent qualification).
(http://www.wordiq.com/definition/British_professional_qualifications)

However, the important point to note here is the word ‘generally’: to me this does not mean all professional qualifications are awarded by professional bodies.

So ‘generally’ professional qualifications are awarded by professional bodies – but what are professional bodies? How do you become a professional body? It appears that it is simple to set up a professional body, all you need to do is:

  • Get a group of people interested in the same subject
  • Produce a charter which describes your aims and ethos
  • Have regular meetings

One interesting point about professional qualifications and bodies that I found was:

Membership of a professional body does not necessarily mean that a person possesses qualifications in the subject area, or that they are legally able to practice their profession.

Some professional bodies can act as a cartel, in which anyone who is not a member cannot legally practise in that domain. Examples of this are within the field of medicine: doctors need to register with the BMA and nurses with the RCN to be able to practise.

So a professional qualification in this context indicates that you are proficient in your field, and some professional bodies only allow you to practise if you continue to keep up to date with current practices and methods and publish new findings for your peers to review. Without doing this you lose your right to practise. IMO this is the direction testers should be going in. We need to continue learning, reading articles, publishing articles and entering into debates about the course we take.

ISEB and the other certification schemes are OK as a starting point, but they are not the end of learning. We need to adapt these schemes so that they do not stay static and become out-dated, as they currently are. The problem is that, for the people who run these schemes, doing this would not be cost effective, and as such it is not in their interest to change. This goes against the reasoning for having these ‘professional qualifications’: the bodies that say they represent us on a professional level are not adhering to two KEY parts of being a professional body:
  • Protecting its fellow professionals
  • Looking after public interest by maintaining and enforcing standards of training and ethics

Without this happening, I have little confidence in the current testing ‘professional qualifications’.

Moving on to James’s question about confusing schooling and education.

I find this interesting since I have seen both sides of the education system (formal schools), having been to school up to the age of 18 and having worked within an education system. I think I see what James is getting at. Formal education both worked and did not work for me: due to my circumstances, up to a certain age I was away from school more than I was there, by my own choice – I just did not go. Once I did settle into going to school regularly, I found it offered me a fantastic grounding in key subjects (maths, science, history, English) – I really struggled with English and still do, according to my wife! It also gave me social skills: being able to share, communicate, listen to others, and let others have a viewpoint which may not agree with mine. I feel lucky in the schools I attended; they may not have been the highest-achieving schools, but they taught great life skills that I am always thankful for. (A pity that schools now think more about league tables than the students.) So how does this differ from schooling? The confusion, I think, comes from the fact that most definitions of schooling see it as part of being at school and formal education.

http://www.yourdictionary.com/schooling
http://www.merriam-webster.com/dictionary/schooling

I find this definition worrying, since I see schooling as something slightly different. It can mean the education you get at school, but what about ‘home’ schooling or self-schooling, in which you embark on a different style of learning which is not institutional?

The other context here could be that James is referring to the differing schools of testing. This does not sit right with me, and I do have a problem with having different ‘schools’ of testing. I see testing as one big thing, not lots of different fragmented schools. Since each school has some strong views and ideas that the others do not agree with, we end up in heated debates in which no side wishes to back down. I am not sure how that helps the testing profession; debates are OK, but constant fighting is not good, and at some point a middle ground should be found, even if it does not sit easily with all sides. Sometimes it is better to act for the good of all rather than for the good of the individual.

My thoughts on these different schools and professional bodies etc. are that maybe, just maybe, all sides should come together and look at forming a learned society.

What is a learned society?

A learned society is an organization that exists to promote an academic discipline or group of disciplines

http://en.wikipedia.org/wiki/Learned_society

I think this would be a wonderful way forward, and maybe the Software Testing Club could be (or form) the society? I am not sure, nor have I investigated what would be needed, but it looks like they do some of it already: publication of articles, etc. I would be most interested in what the people at the Software Testing Club think of this, and what the general community within all of the different schools feels.

Finally to answer the last question by James:

What counts as certification?

There are many definitions of certification, the main one being that an organisation recognises individuals/companies etc. that meet certain criteria. These criteria could be passing exams, years of experience, publication of articles and so on.

However, this really does not answer the question that James asked. The survey shows how many people hold a certification; however, as correctly noted by James, it does not say which certification. I would have expected this number to be much higher. I have many certificates – PAT testing, rugby coaching and first aid – and none of these are really relevant to my day-to-day job of testing, so I still can make no sense of the results as they are displayed. Even if it said a testing certification, which would it mean? ISEB? AST? Etc. This question really stumped me, since I could not find an answer that sat easily with me. If I write regular testing articles (blog, magazine) and publish them, should I be certified? If I got my work colleagues to write a report on how competent I am at testing, would that make me certified? I really do not have an answer for this one, and, as James did on Twitter, I open this question up to the community.

So the challenge is set:

In your opinion:

What counts as certification?

Is Product Knowledge essential for effective testing?

I might not be blogging or online as much as I have been; this is due to family life – those close to me know what I mean. I do have lots of ideas and thoughts; it is just difficult finding time to put them together. I will be at EuroSTAR in Manchester this year and I hope lots of you will be attending.

It has been a while since I posted a new blog article, so here is a new one.

I recently read an excellent article by Paul Gerrard about all testing being exploratory (http://gerrardconsulting.com/index.php?q=node/588) and thought it was so good that I posted it on our company intranet. I got an unexpected reply, which made me think about testing and the skills that are required to be an effective tester. That reply is the reasoning behind this blog post.

The reply I got was as follows:

The interesting challenge to the basic idea is that the tester needs to have knowledge , and very good knowledge of how the system is supposed to work. Only with that knowledge in place is it then really possible to 'intuitively' carry out testing that will be good exploratory testing. Without that deep knowledge it turns into 'random' testing, which, while it has its place in a test approach, I’m not sure it could form the bedrock of a test plan.

The challenge then becomes how to get that knowledge to the tester/test team. I can see how for long term projects/products, the tester becomes truly expert in his Component Under Test, but for new things, or new people to that test team, the ramp up time and 'completeness' of such an approach is questionable and a bit difficult to scale.

For sure exploratory has a part to play - but hard to see how its 'all'.

This made me think about product knowledge and whether it really is essential for effective testing. So I posted the question on Twitter:

Interesting discussion about needing 2 have product knowledge 2 do good exploratory #testing and without this becomes random testing. (1/2)

I have my views on this but would love 2 have #testing community opinions, views, counter views on this. Might do blog post on this (2/2)

I got some replies to this very quickly (as I would expect from such a dynamic community) – sorry if the time order appears a little wrong; I wish Twitter would let me do this more easily than a cut-and-paste job.

@Radionotme

@steveo1967 before gaining experience and understanding I'd have agreed that my testing was more random than exploratory.

@steveo1967 I've recently been exposed to Microsoft AX and found that with experience my exploratory testing is becoming more fruitful.

@mgaertne

@steveo1967 @QualityFrog Well, I would argue that if I know how the software behaves, I don't need to test at all, do I? :)

@steveo1967 @QualityFrog "what I believe it should do" is not knowing to me. :)

@QualityFrog

@steveo1967 @mgaertne product behaviour can be observed in testing. It is implementation, not requirement.

@can_test

@steveo1967 if you know how to do Exploratory Testing well then no prior product knowledge is required. Someone on the team needs it tho!

@steveo1967 IMO that is a very good post on ET. I see nothing there that says prior product knowledge required. (cc @paul_gerrard)

These were very interesting, since they offered opposing views on the statement made…

@Radionotme stated that he found product knowledge useful in preventing random testing, while @mgaertne and @QualityFrog stated that it was not necessary and that you could learn about the system whilst exploring.

I countered this with:

@steveo1967

@QualityFrog @mgaertne the counter claim made is that knowing how the product behaves is essential to test quicker and save time

By this time @Michaelbolton had joined the debate

@michaelbolton

@steveo1967 @mgaertne @qualityfrog Whether knowledge or belief, where do you obtain it? From /testing/ what you know.

steveo1967 It takes exploration to develop a decent strategy, tactics, and checks. To me,

@steveo1967 One can develop product knowledge, learning through exploratory #testing. Running scripts helps to suppress that learning.

Trying to keep up with all the threads I replied:

@steveo1967

@mgaertne @QualityFrog not really you would still test that what you believe it should do it actually does do.

@michaelbolton they are not saying ET is not useful they are saying it is more effective when you have product knowledge. Do you agree?

@Radionotme interesting experience with some product knowledge making your #testing more efficient would like 2 know more.

@Radionotme was it random because u had no structure? SBTM for example helps to structure ET or is it ur ET skills improved?

@mgaertne @QualityFrog v true but the view expressed states have knowledge of the product and how u expect it to behave is essential

More people started to enter the debate:

@can_test

@steveo1967 last thought: is Jazz music random? To the untrained ear, perhaps it is. To the experienced, you see it takes great skill.

@Veretax

@can_test @steveo1967 What is what we perceive as random, really isn't random at all, just we lack sufficeent perception to see its order?

@steveo1967

@can_test if someone on the team needs product knowledge why can this not be the tester?

@can_test

@steveo1967 I didn't say it _couldn't_ be the tester. I said it doesn't _have_ to be the tester. Financial Svcs jobs are bad about this.

@steveo1967

@can_test very well put about random being undisciplined that is my point ET without discipline is random

@can_test

@steveo1967 that's what it sounds like to me anyway. "Effectiveness" is meaningful in a certain context. Don't blame the tools.

@steveo1967 I think it's about Trust, or lack thereof. That is, I will trust your ET if you are a product expert, otherwise no, its random

@steveo1967 experience in anything increases your testing efficiency. ET usually looks random to those who don't understand it.

@steveo1967 IMO, good testing is about changing your perspective on the system. That's harder when you are the SME too.

@steveo1967 what does "random testing" mean to them? I may have many hypotheses I want to test that are off the main path. Is that random?

@steveo1967 the statement I disagree with is that Exploratory Testing requires prior industry/product knowledge. That's not true.

@javandervlis

@michaelbolton @steveo1967 @mgaertne @qualityfrog You either belief there’s milk in the fridge or you don’t be

@michaelbolton

@steveo1967 In addition, exploration implies that someone intends to discover something. Knowledge can never be known to be complete.

@steveo1967 Biases can't be eradicated, but they can be recognized, controlled, and managed in a number of ways. It starts with awareness.

@WadeWachs

@michaelbolton @steveo1967 I am currently the 'fresh eyes'. Old and new eyes both find bugs, though they can be different bugs. 1/?

@steveo1967

@WadeWachs @michaelbolton very good point. My concern with knowing product is having bias expectations of behavior and not seeing problems

@michaelbolton

@steveo1967 More effective /for what/? More product knowledge /vs. more what-else/? Heuristic: fresh eyes find failure. #testing

So from this lively debate what can we conclude?

Some people think that you do not need any product/domain knowledge to carry out exploratory testing, since one of the principles of ET is that you learn about the system as you test. Others say that without knowledge of the product you would just be doing random testing, since your expectations of how it should work are what guide your testing.

My own personal view sits very much in the middle. I have been known to say that I can test any product without any prior knowledge (domain or otherwise of the product); however, the important word missing there is ‘effective’. How effective is my testing without domain knowledge? Does it suddenly become hit and miss and, as stated by @Radionotme, more random? I am currently working on a product which is very niche: without understanding how certain packets are formed and transmitted, you could spend a lot of time testing unnecessary stuff. (There is a counter to this, that no testing is unnecessary in that it exercises the system in non-standard ways. True, but doing too much of this soon makes your testing less effective.)

So to conclude, I do not think there is an obvious answer to this. In some cases I feel domain/product knowledge could be essential to make the testing efficient. That does not mean a competent tester could not learn the domain knowledge very quickly and start to be effective and efficient at testing the product. However, it needs to be recognised that when someone joins a team without domain knowledge there will be some ramp-up time while they become familiar with the domain. In my opinion this is where exploratory testing comes into its own: as an approach for getting someone on board with a system and learning about it, it is the most effective way, especially if you can afford to do paired exploratory testing.

Domain/product knowledge is not essential to do effective testing but it can certainly help.

Sunday, 29 May 2011

A Competent Tester

You can teach a student a lesson for a day; but if you can teach him to learn by creating curiosity, he will continue the learning process as long as he lives. ~Clay P. Bedford

I started writing this blog article in draft about a month ago to have a little rant about how certification does not make you a competent tester based upon my experiences and frustrations whilst trying to recruit testers. I was prompted by a post by Rob Lambert to revisit the article and try to complete it.

Recently I have been trying (and I emphasise the word trying; it has been really trying and challenging to find the right people) to recruit testers to work with me on some quite technical projects. None of the projects have any real UI, nor are they web-based applications; this is real hardcore technical testing at a binary level. I became very frustrated and even got to the stage where I felt my standards were too high.

A lot of the CVs and candidates that were interviewed stated that they had ISEB or ISTQB certification, and from that I assumed they would have a basic grasp of boundary and edge cases. How wrong was I! To make it worse, when I asked for their views on any of the current new approaches and techniques in testing, even prompting with "what do you think of context-driven testing?", all I got back was a blank look. I asked what articles or books they had read recently about software testing, or any book they could relate to software testing, and again all I got was blank looks and shrugs of shoulders.
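To illustrate the kind of basic boundary and edge-case thinking I was probing for, here is a minimal sketch in Python. The function under test and its valid range are entirely hypothetical; the point is simply how boundary-value analysis picks test values at and just beyond each boundary, where off-by-one defects most often hide.

```python
# Hypothetical function under test: accepts ages in the range 18..65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# Boundary-value analysis: test on each boundary and one step either side of it,
# rather than only values from the middle of the range.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # on the lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # on the upper boundary
    66: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, f"unexpected result at {value}"
print("all boundary cases pass")
```

A candidate who genuinely grasps the technique would propose those six values unprompted; that is the level of understanding a certification alone does not guarantee.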

I am not against certification in any shape or form, and if people try to make money out of it, that is not the issue. The issue I have is that in some (most) cases these schemes are sold on the basis that once you have completed them you are a ‘skilled’ tester and know everything there is to know about testing.

PASSING A MULTI-CHOICE EXAM DOES NOT MAKE YOU COMPETENT!!!!!

Rob Lambert, in his article, explains in great depth what makes testers ‘skilled’, and I have to agree with him. Those who read my blog know I have a major interest in the social sciences and psychology, how they relate to testing and, more importantly, how they can help make you a better tester.

So if you want to become competent at testing you have to read more, interact more with the testing community, and become a self-directed learner. My ethos is always to keep on learning and never stop doing so.

Taking any of the certification courses can be, for someone new to the world of testing, a good STARTING point: a grounding in SOME of the techniques and skills. These courses will NOT teach you how testing fits into Agile, or about exploratory testing. Nor will they teach you how to test and make you think ‘outside the box’. Only getting involved with the testing community will do that. A good start is the Software Testing Club: read some of the articles and blogs there and subscribe to the excellent RSS feed. I am not being paid by the Software Testing Club to say this; I am promoting it because it is a good starting point for those who wish to learn more about testing and want to improve so they can become competent testers.

I will finish with a quote about learning.


It's what you learn after you know it all that counts. ~Attributed to Harry S Truman