this number of meetings, it can take 6 to 18 months to develop the system, and there is a serious risk that the design team members may lose interest. This risk is even more serious for organisation members who are not on the design team. In particular, in situations where the complete system becomes rather complex (i.e., many performance indicators to cover all key result areas), a dilemma arises between covering all key result areas and keeping the development period short enough to retain commitment. To solve this dilemma, a
modular design strategy could be followed. For example, Van Tuijl et al. (1997) refer
to a case in the chemical industry where quality performance was the most important
key result area for the operators. The system was installed starting with this module,
leaving aside other key result areas for the moment. Of course, the risk of this strategy is that other key result areas receive less attention than is needed for overall performance. Even for this single module, determining performance indicators turned out to be a rather complex endeavour. The relative importance of no fewer than 25 quality indicators had to be determined. Eventually, this was done by using a financial criterion. Feedback was started with this quality result area, leading to a dramatic improvement in quality performance.
In another case referred to by Van Tuijl et al. (1997), a balance between the key result areas “quality” and “cost reduction” was likewise reached by using financial outcomes to weight the relative importance of these result areas.
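The weighting logic in both cases can be made concrete with a small sketch. The following Python fragment is a minimal illustration only, not the procedure reported by Van Tuijl et al. (1997): it assumes that each indicator's relative weight is set proportional to its estimated financial impact, and all indicator names and figures are hypothetical.

# Hypothetical sketch: combining performance indicators into one overall
# score by weighting each indicator by its estimated financial impact.
# Indicator names and monetary values are invented for illustration; the
# cases above report only that a financial criterion was used.

indicators = {
    # indicator: (current score on a 0-100 scale, financial impact per point)
    "off-spec batches": (72.0, 950.0),
    "rework hours":     (64.0, 400.0),
    "customer returns": (88.0, 1200.0),
}

# Derive relative weights from the financial impact of each indicator, so
# that indicators with larger monetary consequences count more heavily.
total_impact = sum(impact for _, impact in indicators.values())
weights = {name: impact / total_impact
           for name, (_, impact) in indicators.items()}

# Overall performance is the weighted sum of the indicator scores.
overall = sum(weights[name] * score
              for name, (score, _) in indicators.items())

for name, w in weights.items():
    print(f"{name}: weight {w:.2f}")
print(f"overall performance score: {overall:.1f}")

The design choice here is the crucial one discussed in the text: expressing the relative importance of otherwise incommensurable result areas in a single financial currency makes the trade-off between them explicit and negotiable.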
Support and commitment of top management remain important during the whole process of development. Even in situations where top management delegates the approval of performance indicators suggested by design teams to lower management levels, the visible support of top management is needed. The results of the study
of Rodgers and Hunter (1991) on the effectiveness of Management By Objectives pro-
grammesillustratethispoint.Commitmentof topmanagementturnsouttobeanessential
factor for performance improvement in the organisation.
PHASE 2: IMPLEMENTATION
After the design phase, the actual implementation of the system can start. Experience with implementing ProMES systems shows that the first step is to give feedback on the designed performance indicators. At this point the system has to prove its intended value: the proof of the pudding is in the eating. In many projects the reactions of job incumbents to the feedback data give rise to serious discussions (see, e.g., Kleingeld, 1994; Janssen, Van Berkel, & Stolk, 1995), mainly on the controllability of the performance indicators. Controllability
of performance indicators is rarely 100%. There are always external factors that influence the scores on the performance indicators to a greater or lesser extent. For people to accept the feedback data as valid, they should feel that the performance scores indeed reflect their
efforts to do a good job. In fact, sometimes parts of the design phase and, in particular,
the definition of performance indicators have to be repeated (Kleingeld, 1994).
A very practical issue in the implementation phase is the availability of computerised
information systems to provide the performance feedback data. Although these systems have become widely available in organisations, and performance data are often collected even before the goal-setting and feedback system is installed, the performance indicators that have been designed often differ from the existing performance data. Thus, the
reprogramming of computerised information systems is needed. For example, Kleingeld
(1994) reports that it took about five weeks of programming to develop a system that was