Rethinking Evaluation


Evaluation, as discussed in Chapter 1, is one of the critical components of an effective knowledge management approach. However, evaluation, particularly the evaluation of training and development programmes, is often considered the Achilles' heel of HR's work. Evaluation is often an afterthought, rather than something that is designed into training and development interventions from the outset.

With the increasing emphasis on learning in today's knowledge economy, I think there is a need to reframe our view of evaluation. Below I have reproduced an earlier article of mine in which I argue that the evaluation process should be seen as a valuable source of learning in its own right, rather than as a proving process or a way of catching people out.

Evaluating Learning – Achilles' Heel Or Valuable Source Of Learning?[*]

To stay successful in today's ever-changing business world, organisations are investing millions in training and development interventions. Statistics from the DfEE indicate that in 2001 organisations spent £23.5 billion on training (a figure that includes both off-the-job and on-the-job training)[4]. But how do organisations know whether they are investing wisely? How do they know whether they are focusing on the right learning needs? And how do they know whether what has been learnt is being applied, developed and shared as part of the daily routines of working life? These are the types of questions that should be addressed as part of any evaluation process.

Evaluating training and development programmes has always been seen as the Achilles' heel of HR's work. Yet with HR now taking on a more strategic role, there has never been a better time for HR to grasp evaluation by the throat, using it as a tool to help demonstrate added value to the business. For this change to happen, HR will need to work at changing some of the deep-rooted assumptions held about the purpose and anticipated outcomes of evaluation projects.

This article argues that instead of seeing evaluation as a proving process there are benefits to be gained from seeing evaluation as a learning process in its own right. It sets out where the learning points in the evaluation process are and what needs to happen to maximise these learning opportunities.

Intellectually, Evaluation Makes Sense, So Why The Reluctance?

Evaluation, like post-implementation reviews, is one of those tasks that people rarely get over-enthused about. Management developers know that to be a good all-round learner, individuals need to allocate time for giving and receiving feedback and for personal reflection – all part of the evaluation process. As Wilhelms (1971) points out, human beings, like all organisms, depend on feedback for their survival. We ponder how a situation has worked out up to a particular point and what problems we are likely to face going forward, and from there work out our next move. The feedback gained from learning programmes is vital for planning the next developmental steps for individuals, developers and senior management alike.

If feedback is so important to our ongoing development, and through evaluation we get valuable feedback, why are people so reluctant to invest in this process? There are a number of possible explanations. One is that different stakeholders hold different views on the purpose and expected outcomes of the evaluation process. In some organisations evaluation is perceived as a proving exercise, or a way of identifying failures. In these situations the evaluation process is conducted along the lines of an inspection or audit. Wilhelms points out that ‘Most evaluators rush in too soon and concentrate too much on the catching of failures.’ If individuals feel that the evaluators are trying to catch them out, it is no wonder that they are unwilling to engage with the evaluation process, or indeed to see its value or relevance to them.

An alternative perspective on evaluation is that of evaluation being about improving and developing. Adopting this perspective can help to free people up in terms of what they are willing to contribute to the process.

Another explanation is that, having completed a learning project, most developers want to move on to their next project. Indeed, participants themselves are often so keen to get back to work that they do not want to spend time completing evaluation sheets. ‘I need time to think before completing this’ or ‘I have to dash off to . . . I’ll complete it later and post it to you’ are some of the responses that developers receive when evaluation sheets are handed out at the end of a programme. Sometimes these make it back, but often they gather dust in delegates’ in-trays.

One final explanation is that while some individuals are good at reflection, others require the stimulus of other people to help them with this process. It never ceases to amaze me, when conducting evaluations, how much individuals value the opportunity to talk to someone else about what they have learnt on a development programme and what development steps they have taken since. Sadly, not all individuals have an opportunity for this type of one-to-one learning conversation once a development event is over – a missed opportunity from an evaluation perspective.

Evaluation – A Politically Sensitive Task

Conducting evaluations requires a politically sensitive approach. Each evaluation needs to take into account the different interests and expectations of the multiple stakeholders with an interest in the learning programme being evaluated; these stakeholders and their interests are likely to include:

  • The Finance Director whose primary interest will be Return on Investment.

  • The Commissioning Manager whose primary interest will be identifying the extent to which the learning programme is helping to address a specific business issue in his or her area.

  • The developers who will be interested in whether or not the delivered solution meets the customer’s expectations, and whether or not they receive favourable feedback on their personal impact as trainers. The developers are also likely to want to focus on their own learning, for example, the new techniques, skills or knowledge they have acquired as a result of designing and implementing a particular learning programme.

  • The head of HR who will be interested in whether or not the customer is likely to want to purchase the HR team’s services again in the future.

  • The individual learners who will no doubt have a myriad of personal objectives that they hoped to achieve through the learning programme, some of which they will share willingly with others and some of which they will keep to themselves.

Evaluators need to be aware of the many factors that can affect the way in which different stakeholders engage with the evaluation process. These factors include:

  • Assumptions held by stakeholders about the purpose of the evaluation – is it for proving or improving purposes?

  • General attitudes towards evaluation – is it seen as a necessary evil, or a means of learning?

  • Previous involvement with evaluation projects – was it an enjoyable and engaging experience, or stressful?

  • Nature of the learning intervention to be evaluated – is it a self-contained area, or is it more broad-ranging?

  • Clarity and openness about the overall process, including the dissemination of findings.

  • Level of attachment to the learning intervention – a high level of attachment by stakeholders can mean that individuals are less able to be objective with their input.

  • Criteria for participation in the evaluation – is it voluntary or imposed?

The experienced evaluator will be attuned to these different factors and, by investing time in getting to know the different stakeholders personally, will identify the extent to which these factors apply on any given evaluation.

It Is Never Too Early To Start

Many writers agree that the evaluation process needs to start much earlier in the development cycle than is often the case. The best time to start thinking about evaluation is as soon as the business problem is being teased out. However, evaluations frequently get tagged on as an afterthought, rather than being planned in at the outset of the overall learning project. It is hardly surprising, then, that when developers who have already spent thousands of pounds on developing a learning programme ask for more money to evaluate it, their request is met with little enthusiasm. It is far better for any additional resources needed to conduct the evaluation to be identified at the overall planning stage.

If evaluation is to be seen as a learning tool in its own right, then it is particularly important that the evaluation scope and approach are discussed at the outset of the project. Wherever possible, evaluators should try to encourage a partnership approach whereby different stakeholders work together to ensure that the evaluation scope and approach meet their collective needs. Again, this helps to maximise the opportunities for learning among the different stakeholders.

[*]For an introductory book on the use of Social Network Analysis see J. Scott. (1991), Social Network Analysis. Sage Publications.

[4]Learning and Training at Work 2000, Labour Market Quarterly, DfEE Skills and Enterprise Network Publication, May 2001.




Managing for Knowledge: HR's Strategic Role
ISBN: 0750655666
Year: 2003
Pages: 175
