monitor a program's standard of performance, to determine its strengths and weaknesses, and to decide whether adjustments are needed to improve it. Evaluation measures whether a program made a difference in people's lives and whether it was cost effective. Furthermore, evaluation can collect information on the target population that will allow researchers to develop hypotheses about group members' behaviors (Doyle & Ward, 2001; Taylor-Powell, Steele, & Douglah, 1996).
Health specialists should start the evaluation process at the same time they are developing the health education and promotion plan. A good way to start is by creating an evaluation plan that details the evaluation objectives (which are closely related to the program objectives), the budget assigned to the evaluation activities, and the evaluation design (Doyle & Ward, 2001). The ideal is to use an experimental design, in which a control group that does not receive the intervention is compared with the group that does; with this design you can demonstrate whether or not your intervention caused the observed change in people's behavior or knowledge. However, this type of evaluation requires amounts of time, money, expertise, and other resources that health education professionals often cannot afford. Nevertheless, it is still desirable to attempt to collect baseline information. With baseline data for comparison, program providers can show the difference in the target population before and after the intervention, using evaluation designs that, for example, examine one group at pretest and posttest or that perform static group comparisons (Doyle & Ward, 2001).
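To make the one-group pretest and posttest idea concrete, the following minimal sketch (not from the source) compares hypothetical knowledge scores collected from the same participants at baseline and at follow-up, using a paired t-test. The scores, variable names, and the use of the scipy library are illustrative assumptions; a design like this can show change over time, but on its own it cannot prove that the intervention caused the change.

```python
# Minimal sketch of a one-group pretest-posttest comparison.
# All data below are hypothetical and for illustration only.
from scipy import stats

# Knowledge scores (0-10) for the same ten participants,
# measured before (pretest) and after (posttest) the program.
pretest = [4, 5, 3, 6, 5, 4, 7, 5, 4, 6]
posttest = [6, 7, 5, 8, 6, 5, 8, 7, 6, 7]

# Average gain from baseline to follow-up.
mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)

# Paired t-test: did scores change significantly between the two time points?
t_stat, p_value = stats.ttest_rel(posttest, pretest)

print(f"Mean knowledge gain: {mean_gain:.2f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

A static group comparison would instead compare the posttest scores of participants against those of a similar group that did not receive the program; without random assignment, either design only suggests, rather than proves, that the program produced the difference.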
When planning the evaluation, it is important for professionals to specify what they intend to evaluate, how they will measure it, and the expected outcomes given the culture and characteristics of the target population. Oftentimes, health education professionals define the success of their programs in terms of standard benchmarks (such as knowledge gain or development of skills) that do not take into consideration the participants' perspectives on how the program has benefited them. One way to include participants' views is to use a participatory evaluation approach that involves stakeholders in planning the evaluation process, so they can describe what they would like to measure and what types of information they would like to obtain from it (Taylor-Powell et al., 1996). The following case example illustrates how relevant the target population's perspective can be for defining the indicators used to measure program success.
Program staff defined and evaluated their outcome of a bilingual nutrition education program as nutrition knowledge gained. An evaluation showed little, if any, gains in knowledge. Upon further probing, it was found that the participants were very satisfied with the program. For them, it had been very successful because at its conclusion they were able to shop with greater confidence and ease, saving time. Staff-defined definitions of outcomes missed some important benefits as perceived by the participants [Taylor-Powell et al., 1996, p. 6].
Another key element in planning and conducting the evaluation is to identify the type of information that is needed to demonstrate the effectiveness of the program and then to select collection methods and instruments that will capture that type of information. Oftentimes planners prioritize the collection of quantitative information (numerical