Evaluating
Training
Outcomes
Objectives –
All: to review the relevance of evaluating training;
Most: to review the Kirkpatrick Four Levels taxonomy
and the criticisms associated with it;
Some: to consider the importance of the balanced
scorecard in delivering a complete picture of
organisational performance.
Key points to consider
• Many organisations now invest heavily in
evaluating training interventions.
• Some organisations take the view that the return
may not be worth the cost (McGuire, 2014).
• Put simply, the primary purpose of evaluation is
to assist organisational decision-making.
• ‘Evaluation is a means for understanding the effect
of our actions in a work environment and a
process for measuring and promoting shared
individual, team and organisational
learning’ (Torres, 1999).
• As James and Roffe (2000) point out, evaluation
should be an ongoing, progressive activity,
comparing the actual and real with the predicted
or promised.
Why has evaluation experienced
shortcomings?
• The ‘confounding variables effect’: organisations
refuse to evaluate because of the difficulty of
disentangling the effects of training from other stimuli.
• The ‘non-quantifiable effect’: the effects of training
are difficult to quantify.
• The ‘cost outweighing the benefits effect’: many
organisations do not evaluate training because the
cost of doing so exceeds the expected benefit.
• The ‘act of faith effect’: organisations assume
that training must bring about beneficial effects,
which negates the perceived need for evaluation.
• The ‘trainer sensitivity effect’: evaluation is
discounted because negative feedback could arise
and affect the confidence of the trainer.
(Lewis & Thornhill, 1994)
Talking point:
The importance
of evaluation –
CIPD viewpoint
• Evaluating learning and talent development is crucial to ensuring
the effectiveness of an organisation's learning initiatives and
programmes. Effective evaluation means going beyond the
traditional ‘reactions’ focus based on a simplistic assessment of
learners’ levels of satisfaction with the training provision. Rather,
it is important to use simple yet sophisticated approaches such
as CIPD's ‘RAM’ model (relevance, alignment, measurement) to
evaluate learning outcomes and the extent to which learning
provision is aligned with business objectives. Such a focus helps to
ensure that learning and talent interventions deliver value for
both learners and organisations alike. Practitioners should also
recognise that whilst levels-based evaluation, typified by Kirkpatrick
and Return on Investment (ROI), dominates our thinking, it is
often poorly used. An output, change and improvement focus is
much more productive. The promise of ‘big data’ and its HR
counterpart, talent analytics, will present new opportunities for
effective evaluation.
• (Adapted from: CIPD (2010) Evaluating Learning and Talent
Development. CIPD Factsheet. http://www.cipd.co.uk/hr-
resources/factsheets/evaluating-learning-talent-
development.aspx#link_cipd_view.)
Questions
• How can the importance of evaluation be
emphasised in organisations?
• What do we mean by the word ‘value’ in the
expression ‘to ensure learning and talent
interventions deliver value for both learners
and organisations alike’?
Key Reminders about
Evaluating Training
• Conducting training evaluations can be
expensive; consequently, it is important to
identify occasions where it is best not to
evaluate.
• For many organisations the cost of evaluation
outweighs the benefit, with the impact of
training typically less than 15 per cent
(Brinkerhoff, 2006a).
• Evaluation represents a serious attempt to
understand the process of cause-and-effect
and how training can affect individual
behaviour, group and departmental targets
and organisational efficiency.
Kirkpatrick Model
• Level 1: Reactions.
• The responses of
trainees to the content and
methods of the programme are
elicited.
• Feedback sheets (sometimes called
‘reactionnaires’ or ‘happy sheets’),
oral discussions and checklists are
used.
• This level constitutes a formative
evaluation.
Kirkpatrick Model
• Level 2: Learning.
• The actual learning of trainees is
measured and an assessment is
made regarding how well trainees
have advanced in their level of
knowledge and skills.
• This is achieved through the use of
tests, projects, portfolios and
learning logs.
• This level constitutes a formative
evaluation.
Kirkpatrick Model
• Level 3: Transfer.
• The effect of the training
programme on the behaviour of
the trainee in the workplace is
measured.
• Observation, interviews, critical
incident technique and post-
programme testing are often used
to assess the level of training
transfer.
• This level constitutes a summative
evaluation.
Kirkpatrick Model
• Level 4: Results.
• The impact of the training on
the performance of the
employee is examined.
• Workplace metrics (such as
productivity, levels of waste,
throughput) and cost–
benefit analysis can be used
here
• However, it is often difficult
to establish causal linkages
between the training and any
resulting improvement.
Talking point: Level 1 evaluation
– myths and reality
• The first stage of Kirkpatrick's
Four Levels taxonomy is the Reactions
stage. It is designed to help organisations
assess participant reactions to training
interventions and gauge their feedback on
issues such as the quality of the setting,
instructor, materials and learning
activities. In theory, level 1 evaluation is
meant to act like the proverbial canary in
the coal mine, alerting the organisation to
problems and difficulties that are being
experienced. However, several myths
have arisen regarding level 1 evaluation
and we will now examine three of these:
Myths and Reality
• Myth: Level 1 evaluation is
simply a ‘smile sheet’ or
‘reactionnaire’ and contains little
or no useful or actionable
information.
• Reality: A key benefit
of level 1 evaluation lies in its
ability to identify content
sequencing problems, training
delivery and facilitation issues as
well as venue and setting
problems. Speedy detection of
these issues allows them to be
quickly remedied without causing
undue long-term damage to the
training programme itself.
Myths and Reality
• Myth: As long as learners are
happy and content, then the
training must have been
successful.
• Reality: The extent of learner
happiness is not a predictor of
overall learning. As the stages
within Kirkpatrick's Four
Levels taxonomy are not
correlated, a high rating at
one level does not translate to
a high rating at a subsequent
level – in other words, just
because a learner is happy
doesn't mean they have learnt
anything.
Myths and Reality
• Myth: Learners are well equipped to
assess the quality, value and relevance of
the training as it relates to their actual
job.
• Reality: Learners are not always the best
individuals to assess the effectiveness of
training transfer – they may not know
themselves what components of the
training will be useful in their day-to-day
jobs. Thus, training transfer should be
carried out by trained evaluation experts
or by line managers supported by the HRD
function.
Questions
• Despite large sums of money
being invested in training, only
limited resources are often
devoted to evaluation, with level
1 evaluation being the most
common form of evaluation. How
can the effectiveness of level 1
evaluation be improved?
• Level 1 evaluation is sometimes
seen as an exercise
conducted by trainers to ‘validate
their worth’. How can the
emphasis of level 1 evaluation be
shifted from being perceived as
an ego-boosting exercise to one
designed to improve the quality
of training design and delivery?