
Monday, 19 October 2015

MEWT4 Post #2 - A coaching model for model recognition - Ash Winter

Abstract:

As part of my role, I often coach testers through the early part of their career. In this context I have noted a pattern in the application and interpretation of models. They are generated internally through various stimuli (learning, influence of others, organizational culture) and then applied subconsciously for the most part, until there is sufficient external scrutiny to recognize them. To this end, I have created a model of questions to help testers to elevate their internal models to a conscious level and begin to articulate them.

To this end I hope to articulate at MEWT:

  • Presentation of the model of questions to determine internal models in use, without introducing models explicitly.
  • Use of Bloom's Taxonomy to visualize a coachee's modelling paradigm and the steps towards modelling consciously.
  • Practical examples of using this model to assist early career consulting testers to cope with new client information saturation.
Slides for the talk by Ash can be downloaded here - https://mewtblog.files.wordpress.com/2015/10/coaching-model-for-unrecognised-internal-models.pptx

____________________

The first speaker at MEWT was Ash Winter, who talked about his experience of coaching and how coaches have their own internal models, which could still be wrong.  Ash talked about the issues he and other coaches have experienced with using models and the risk that they can limit your thinking.  He had noticed that some coaches talk about models without really recognizing that they are using a particular model.  This appears to be especially true in the testing domain.

Ash presented a different coaching model based on Bloom's taxonomy to provide a framework for asking questions of those you are coaching, rather than providing answers.  Ash stated that we should, as coaches, “Build your model on pillars of questions, not answers.  You are coaching.”

The levels of Bloom's taxonomy can be seen here:



An in-depth look at Bloom's taxonomy can be found here.


Ash displayed a different variant of this during his talk:



Ash stated that he felt Bloom's taxonomy was good for learning and useful for coaching as well.  Since Bloom's works on the basis that you work towards goals, this also applies to those who coach and use coaching models.

Ash also stated that his model is aimed at experienced coaches who are coaching those early in their career as a tester.  As with any other model, Ash did point out that he felt this was a new coaching model which was still evolving and emergent, and he wanted input from the wider community.

During the discussion after Ash had spoken, I highlighted that the Bloom's taxonomy approach does have some flaws, especially in the digitally driven learning environment in which we are now situated.
The hierarchical approach of Bloom's does not encourage the deep and meaningful learning aided by digital media.

“The problem with taxonomies is their attempt to pin down the complexity of cognition in a list of simple categories. In practice, learning doesn’t fall into these neat divisions. It’s a much more complex and messier set of cognitive processes.”
Further reading on issues with Bloom's taxonomy: http://donaldclarkplanb.blogspot.com/2006/09/bloom-goes-boom.html

There are alternative learning models which appear to overcome these flaws in Bloom's, and perhaps mixing them together would provide a more robust model for Ash to work with.

For example:
“Heutagogy is the study of self-determined learning … It is also an attempt to challenge some ideas about teaching and learning that still prevail in teacher centred learning and the need for, as Bill Ford (1997) eloquently puts it ‘knowledge sharing’ rather than ‘knowledge hoarding’. In this respect heutagogy looks to the future in which knowing how to learn will be a fundamental skill given the pace of innovation and the changing structure of communities and workplaces.” https://heutagogycop.wordpress.com/history-of-heutagogy/
Or
“Connectivism is driven by the understanding that decisions are based on rapidly altering foundations. New information is continually being acquired. The ability to draw distinctions between important and unimportant information is vital. The ability to recognize when new information alters the landscape based on decisions made yesterday is also critical.” http://www.itdl.org/journal/jan_05/article01.htm
At the end of the talk by Ash, the group felt they needed to go away and think more about the ideas Ash had discussed.

To finish I will leave you with a quote from Ash during the talk:

“A lot of people do not know what models are; sometimes they emerge during applied practice.”








Friday, 16 October 2015

MEWT4 Post #1 - Sigh, It’s That Pyramid Again – Richard Bradshaw

This is the first in a series of posts I plan to write after attending the fourth MEWT peer conference in Nottingham on Saturday 10th October 2015.

Before I start I would like to say thank you to all the organizers for inviting me along, and a BIG MASSIVE thank you to the AST for sponsoring the event.



Abstract: 

Earlier on in my career, I used to follow this pyramid, encouraging tests lower and lower down it. I was all over this model. When my understanding of automation began to improve, I started to struggle with the model more and more.

I want to explore why, and discuss with the group what a new model could look like.

___________________________

During the session Richard explained his thoughts about the test automation pyramid created by Mike Cohn in his book Succeeding with Agile and how the model has been misused and abused.



Richard talked about how the model has been adapted and changed over the years, from having more layers added...



...to being turned upside down and turned into an ice-cream cone.


Duncan Nisbet pointed out that this really is now an anti-pattern - http://c2.com/cgi/wiki?AntiPattern.  The original purpose of the diagram by Mike was to demonstrate the need for fast feedback from your automation, and as such it focused the automation effort at the bottom of the pyramid, where the feedback should be fast.  The problem Richard has been experiencing is that this model does not show the testing effort or tools needed to get this fast feedback.  It also indicated that as you move up the pyramid less automation effort was needed or should be done.  The main issue for Richard was how the pyramid has been hijacked and used to suggest that the priority of effort should be on automation, rather than focusing on the priority of both testing and automation in a given context.
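To make the original intent of the pyramid concrete, here is a minimal sketch of what a check at each layer might look like. It assumes a hypothetical Python shop application checked with pytest, requests and Selenium; the module, endpoint and element names (shop.pricing, /api/orders, place-order) are illustrative, not taken from the talk.

```python
# Illustrative only: one check per pyramid layer for a hypothetical shop app.
# Lower layers give faster, more focused feedback; higher layers exercise
# more of the stack but run more slowly and are harder to diagnose.
import requests
from selenium import webdriver

from shop.pricing import calculate_total  # hypothetical unit under test


def test_unit_calculate_total_applies_discount():
    # Unit layer: milliseconds of feedback, no I/O, pinpoints the failure.
    assert calculate_total(price=100, quantity=2, discount=0.1) == 180


def test_service_create_order_returns_201():
    # Service/API layer: checks the deployed service's contract over HTTP.
    response = requests.post(
        "http://localhost:8080/api/orders",
        json={"sku": "ABC-123", "quantity": 2},
        timeout=5,
    )
    assert response.status_code == 201
    assert response.json()["status"] == "pending"


def test_ui_checkout_shows_confirmation():
    # UI layer: slowest and most brittle, which is why the pyramid argues
    # for fewer checks here - fewer checks, not less testing.
    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8080/checkout")
        driver.find_element("id", "place-order").click()
        assert "Order confirmed" in driver.page_source
    finally:
        driver.quit()
```

Run with pytest; in practice the service and UI checks would sit in separate, slower suites so the fast unit layer can gate every commit. The point the original diagram was making is about feedback speed, not about where the thinking happens.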

Richard presented an alternative model in which both testing and automation, along with the tools required, could be shown on the ice-cream cone diagram.



With this diagram the sprinkles on the top were the tools and the flakes were the skills.  He then adjusted the model in real time, suggesting it would be better as a cross-sectional ice-cream cone with testing throughout the cone and the tools across all areas of the original pyramid.  Many attendees liked this representation of the model, but some thought that it still encouraged the idea that you do less of certain testing activities as you move down the ice-cream cone.

At this stage I presented a model I had been using internally to show the testing and checking effort. 



Again, people thought this indicated that we need to do less as we move up the pyramid, and it went back to the original point made by Richard that the pyramid should die.

After MEWT I thought about this problem and tweeted an alternative representation of the diagram. After a few comments and some feedback the diagram ended up as follows:



With this model the pyramid is removed. Each layer has the same value and importance in a testing context.  It shows that the further up the layers you go, the more the focus should shift from checking to testing, while lower down the focus should be on automating the known knowns.  All of this is supported by tools and skills.  As a model it is not perfect and it can be wrong for given contexts; however, for me it provides a useful starting point for conversations with those that matter.  It especially highlights that we cannot automate everything, nor should we try to do so.
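For illustration, the sketch below encodes that layered view as plain data: every layer carries equal weight, with an indicative checking/testing balance that shifts towards testing as you move up, and tools and skills cutting across all of them. The layer names and ratios are my own assumptions for the sake of the example, not part of the model itself.

```python
# A rough, illustrative encoding of the layered model: equal-weight layers,
# a checking/testing balance that shifts towards testing as you move up,
# and tools and skills supporting every layer. Names and ratios are
# assumptions made for this sketch only.
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    checking: float  # indicative share of effort on automated checking
    testing: float   # indicative share of effort on exploratory testing


LAYERS = [
    Layer("unit", checking=0.9, testing=0.1),            # automate the known knowns
    Layer("service / API", checking=0.7, testing=0.3),
    Layer("user interface", checking=0.4, testing=0.6),
    Layer("end-to-end / exploratory", checking=0.2, testing=0.8),
]

SUPPORTED_BY = ("tools", "skills")  # cut across every layer equally

if __name__ == "__main__":
    for layer in LAYERS:
        print(f"{layer.name:<26} checking={layer.checking:.0%}  testing={layer.testing:.0%}")
    print("supported throughout by:", ", ".join(SUPPORTED_BY))
```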

In summary, the talk given by Richard was one of the many highlights of the day at MEWT and inspired me to look further into the test automation pyramid model and its failings.  I agree with Richard that the original model should die, especially in the way it is often misused.  Richard provided some useful alternatives which could work, and hopefully as a group we improved upon the original model.   Richard did clarify that his ice-cream cone model with sprinkles is not his final conclusion or his final model, and he will be writing something more on this in the near future.  His blog can be found here - http://www.thefriendlytester.co.uk/.

Now it is over to you, please provide your feedback and comments on this alternative model.