Page 277 - Psychological Management of Individual Performance
framework that can be used to guide the design and implementation phases. Practitioners
and researchers have traditionally adopted Kirkpatrick’s (1976) hierarchy which specifies
four levels of training criteria: reactions, which refers to trainees’ satisfaction with train-
ing; learning, which relates to the acquisition of relevant knowledge and skills; behavior,
or the transfer of trained skills to the job; and results, which concerns organizational indicators such as productivity. Kirkpatrick's (1976) model is useful in that it highlights the
need to avoid relying on trainee reactions as the sole measure of training effectiveness
and include other less subjective measures in the evaluation. In a meta-analysis of 34
studies, Alliger, Tannenbaum, Bennett, Traver, and Shotland (1997) reported weak cor-
relations between the first three levels of Kirkpatrick’s (1976) model, and a correlation
of only .07 between reactions and knowledge tests conducted immediately at the end
of training. These results reinforce concerns that trainee reactions may be a misleading
indicator of training effectiveness (Druckman & Bjork, 1994; Hesketh, 1997a).
The dominance of the Kirkpatrick (1976) approach has been undermined by an in-
creased awareness of the shortcomings of the model, and in particular the lack of evidence
for the hypothesized causal connection between the four levels of evaluation criteria
(Holton, 1996; Kraiger & Jung, 1996). Consequently, a number of alternative evaluation
models have been proposed. Among these is the model by Kraiger, Ford, and Salas
(1993), in which they argue that evaluation measures should be derived from learning
outcomes that relate to cognitive (or knowledge-based), skill-based, and affective (attitudinal and motivational) domains. Importantly, the learning outcomes should be derived
from the training objectives that in turn follow from the needs assessment, thereby ensuring a close correspondence between what is evaluated and what was learned (Kraiger &
Jung, 1996). Where possible, measures should be taken over an extended period, as
some training methods which deliver impressive short-term results are less effective in
the longer term than techniques which initially appear less favorable (Schmidt & Bjork,
1992). The availability of post-training discussion groups, via the Web, offers another
way of extending training and incorporating evaluation into it. Carefully structured
questions used as a basis for discussion can provide feedback about the extent to which
ideas and information were acquired from training, while also providing an opportunity
to process more deeply the content of the training course.
In summary, the benefits of conducting evaluations justify the effort required to con-
duct a thorough examination of the short- and longer-term effects of training. Where
practical constraints preclude the use of full experimental designs, quasi-experimental
designs that incorporate measures from a variety of domains will often be preferable
to not evaluating training at all. Further research is required to establish whether newer
models of evaluation overcome the problems associated with the Kirkpatrick (1976) ap-
proach and whether they provide a useful framework for guiding the activities of training
practitioners.
CONCLUSIONS
This chapter has aimed to illustrate the importance, when designing training, of knowing what the desired training outcomes are and of understanding the context in which the trained skills and behaviors are likely to be performed. The process of identifying training
needs should involve specifying, not only key knowledge, skills, and attitudes, but also