s = −σ₂²η₁ / (σ₁²η₂ − σ₂²η₁) ,   (4.44)

and

[sΣ₁ + (1−s)Σ₂]V = (M₂ − M₁) ,   (4.45)
where s stays between 0 and 1 because η₁ < 0 and η₂ > 0. Thus, if we can find V and v₀ which satisfy (4.43) and (4.45), these V and v₀ minimize the error of (4.38) [3]. Unfortunately, since ηᵢ and σᵢ² are functions of V and v₀, an explicit solution of these equations has not been found. Thus, we must use an iterative procedure to find the solution.
Before discussing the iterative process, we need to develop one more equation to compute v₀ from s and V. This is done by substituting η₁ and η₂ of (4.19) into (4.44), and by solving (4.44) for v₀. The result is
v₀ = − [sσ₁²VᵀM₂ + (1−s)σ₂²VᵀM₁] / [sσ₁² + (1−s)σ₂²] .   (4.46)
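Once s and V are given, (4.46) is a direct computation. The following is a minimal sketch of that step, assuming NumPy; the function and variable names are ours for illustration, not the book's.

import numpy as np

def v0_from_s_and_V(s, V, M1, M2, Sigma1, Sigma2):
    """Evaluate v0 by (4.46) for a given s and V (illustrative names)."""
    sig1 = V @ Sigma1 @ V                      # sigma_1^2 of (4.20)
    sig2 = V @ Sigma2 @ V                      # sigma_2^2 of (4.20)
    num = s * sig1 * (V @ M2) + (1 - s) * sig2 * (V @ M1)
    return -num / (s * sig1 + (1 - s) * sig2)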
The iterative operation is carried out by changing the parameter s with an increment of Δs as follows [4]:
Procedure I to find s (the theoretical method):
(1) Calculate V for a given s by
V = [sΣ₁ + (1−s)Σ₂]⁻¹(M₂ − M₁).
(2) Using the V obtained, compute σᵢ² by (4.20), v₀ by (4.46), and ηᵢ by (4.19) in that sequence.
(3) Calculate ε by (4.38).
(4) Change s from 0 to 1.
The s which minimizes ε can be found from the ε vs. s plot.
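A minimal computational sketch of Procedure I follows. It assumes the normal model for h(X) and the convention that h(X) < 0 assigns X to ω₁, so that (4.38) takes the form ε = P₁Φ(η₁/σ₁) + P₂Φ(−η₂/σ₂), with Φ the standard normal distribution function; NumPy/SciPy and all function names are our own choices, not the book's.

import numpy as np
from scipy.stats import norm

def procedure_one(M1, M2, Sigma1, Sigma2, P1=0.5, P2=0.5, steps=99):
    """Sweep s over (0, 1) as in Procedure I and return the minimizing
    (error, s, V, v0), under the normal assumption on h(X)."""
    best = None
    for s in np.linspace(0.01, 0.99, steps):   # increments of Delta-s
        # (1) V for the given s, by (4.45)
        V = np.linalg.solve(s * Sigma1 + (1 - s) * Sigma2, M2 - M1)
        # (2) sigma_i^2 by (4.20), v0 by (4.46), eta_i by (4.19)
        sig1, sig2 = V @ Sigma1 @ V, V @ Sigma2 @ V
        v0 = -(s * sig1 * (V @ M2) + (1 - s) * sig2 * (V @ M1)) \
             / (s * sig1 + (1 - s) * sig2)
        eta1, eta2 = V @ M1 + v0, V @ M2 + v0
        # (3) the error of (4.38) for normal h(X)
        eps = (P1 * norm.cdf(eta1 / np.sqrt(sig1))
               + P2 * norm.cdf(-eta2 / np.sqrt(sig2)))
        # (4) keep the best s found while sweeping
        if best is None or eps < best[0]:
            best = (eps, s, V, v0)
    return best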
The advantage of this process is that there is only one parameter, s, to adjust. This makes the process much simpler than solving (4.43) and (4.45) directly for the n + 1 variables of V and v₀.
Example 4: Data I-Λ is used, and ε vs. s is plotted in Fig. 4-8. As seen in Fig. 4-8, ε is not particularly sensitive to s around the optimum point.
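This flat behavior of ε near the optimum can be observed numerically with the procedure_one sketch above. The two-class parameters below are arbitrary stand-ins chosen for illustration; they are not the Data I-Λ parameters used in the book.

import numpy as np

# Arbitrary two-class normal parameters (illustrative only, not Data I-Lambda).
M1, M2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
Sigma1 = np.eye(2)
Sigma2 = np.array([[2.0, 0.3], [0.3, 1.5]])

eps, s, V, v0 = procedure_one(M1, M2, Sigma1, Sigma2)
print(f"minimum error {eps:.4f} at s = {s:.2f}")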