
In order to reduce the sensitivity to statistical errors, we might also want to regularize the inverse operation. Suppose that $\hat{\Lambda}$ is the diagonal matrix containing the eigenvalues of the estimated covariance matrix $\hat{C}$ (we conveniently drop the index $k$ for a moment), and that $\hat{V}$ is the matrix containing the eigenvectors. Then we can define a regularized inverse operation as follows:
\[
\hat{C}^{-1}_{\text{regularized}} = \hat{V}\Bigl[(1-\gamma)\,\hat{\Lambda} + \gamma\,\frac{\operatorname{trace}(\hat{\Lambda})}{N}\,I\Bigr]^{-1}\hat{V}^{T},
\qquad 0 \leq \gamma \leq 1
\tag{5.16}
\]
where $\operatorname{trace}(\hat{\Lambda})/N$ is the average of the eigenvalues of $\hat{C}$, and $\gamma$ is a regularization parameter. The effect is that the influence of the smallest eigenvalues is tamed. A simpler implementation of (5.16) is (see exercise 2):
\[
\hat{C}_{\text{regularized}} = (1-\gamma)\,\hat{C} + \gamma\,\frac{\operatorname{trace}(\hat{C})}{N}\,I
\tag{5.17}
\]
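To make the relation between (5.16) and (5.17) concrete, a minimal MATLAB sketch is given below. It is not part of the original text: the variable names C_hat and gamma, the illustrative data, and the chosen value of the regularization parameter are assumptions.

X = randn(50, 3);                   % some measurement data (illustrative)
C_hat = cov(X);                     % estimated covariance matrix
N = size(C_hat, 1);                 % dimension of the measurement space
gamma = 0.1;                        % regularization parameter, 0 <= gamma <= 1

% Eq. (5.16): regularized inverse via the eigenvalue decomposition
[V, Lambda] = eig(C_hat);           % C_hat = V*Lambda*V'
Lambda_reg = (1-gamma)*Lambda + gamma*(trace(Lambda)/N)*eye(N);
Cinv_reg = V / Lambda_reg * V';     % equals V*inv(Lambda_reg)*V'

% Eq. (5.17): the simpler shrinkage of C_hat itself
C_reg = (1-gamma)*C_hat + gamma*(trace(C_hat)/N)*eye(N);
% inv(C_reg) coincides with Cinv_reg (see exercise 2)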
Another method to regularize a covariance matrix estimate is to suppress all off-diagonal elements to some extent. This is achieved by multiplying these elements by a factor selected between 0 and 1.
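A possible implementation of this off-diagonal suppression is sketched below; the factor name beta and the illustrative covariance matrix are assumptions, as the text does not name them.

C_hat = cov(randn(50, 3));           % an estimated covariance matrix (illustrative)
beta = 0.5;                          % suppression factor between 0 and 1
D = diag(diag(C_hat));               % diagonal part of C_hat
C_supp = D + beta*(C_hat - D);       % off-diagonal elements multiplied by beta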
Example 5.1 Classification of mechanical parts, Gaussian assumption
We now return to the example in Chapter 2, where mechanical parts such as nuts and bolts must be classified in order to sort them; see Figure 2.2. Listing 5.2 gives the PRTools procedure for training and visualizing the classifiers. Two classifiers are trained: a linear classifier (ldc) and a quadratic classifier (qdc). The trained classifiers are stored in w_l and w_q, respectively. Using the plotc function, the decision boundaries can be plotted. In principle this visualization is only possible in 2D; for data with more than two measurements, the classifiers cannot be visualized.


Listing 5.2
PRTools code for training and plotting linear and quadratic discriminants under the assumption of normal distributions of the conditional probability densities.


load nutsbolts;        % Load the mechanical parts dataset
w_l = ldc(z,0,0.7);    % Train a linear classifier on z