
250                        Introduction to Statistical Pattern Recognition



                       Computer Projects

                       1.   Repeat Experiment 1.

                       2.   Repeat Experiment 4.  Also, estimate the asymptotic error for each n by
                            using the line fitting procedure.

                       3.   Repeat Experiment 5.  Also, estimate the asymptotic error for each n by
                            using the line fitting procedure.

                       4.   Repeat Experiment 6.

                       5.   Repeat Experiment 9.

                       6.   Repeat Experiment 10.


                       Problems


                       1.   The Fisher criterion, f = (m₂ - m₁)²/(σ₁² + σ₂²), measures the class
                            separability between two one-dimensional distributions.  Compute the
                            bias and variance of f̂ when these parameters are estimated by using
                            N samples from N(0,1) for ω₁ and N samples from N(1,4) for ω₂.
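As a numerical companion to Problem 1 (reading the garbled distributions as N(0,1) for ω₁ and N(1,4) for ω₂), the bias and variance of the plug-in criterion can be checked by Monte Carlo.  This is a sketch only, not the analytic derivation the problem asks for:

```python
import numpy as np

# Monte Carlo estimate of the bias and variance of the plug-in Fisher
# criterion f_hat = (m2_hat - m1_hat)^2 / (s1^2 + s2^2), using N samples
# per class from N(0,1) (class 1) and N(1,4) (class 2, std = 2).
# The true criterion is f = (1 - 0)^2 / (1 + 4) = 0.2.
rng = np.random.default_rng(0)
N, trials = 100, 20000
f_true = (1.0 - 0.0) ** 2 / (1.0 + 4.0)

x1 = rng.normal(0.0, 1.0, size=(trials, N))   # class omega_1 samples
x2 = rng.normal(1.0, 2.0, size=(trials, N))   # class omega_2 samples

m1, m2 = x1.mean(axis=1), x2.mean(axis=1)     # sample means
s1, s2 = x1.var(axis=1, ddof=1), x2.var(axis=1, ddof=1)  # sample variances
f_hat = (m2 - m1) ** 2 / (s1 + s2)

bias = f_hat.mean() - f_true
variance = f_hat.var()
print(f"empirical bias = {bias:.4f}, variance = {variance:.5f}")
```

Comparing the empirical bias against the analytic expansion (for several values of N) is a useful way to validate the hand derivation.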
                       2.   Let f(m̂) be a function of m̂, where m is the expected value of a one-
                            dimensional normal distribution and is estimated by the sample mean m̂
                            using N samples.  Expand f(m̂) around f(m) up to the fourth-order
                            term, and confirm that E{Δf⁽³⁾} = 0 and E{Δf⁽⁴⁾} ~ 1/N², where
                            Δf⁽ⁱ⁾ denotes the ith-order term of the expansion.
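The moment facts underlying Problem 2 can be verified numerically: for the sample mean m̂ of N draws from N(m, σ²), the deviation m̂ - m is distributed as N(0, σ²/N), so its third central moment is zero and its fourth is 3σ⁴/N², which is of order 1/N².  A Monte Carlo sketch (a check only, not the Taylor-expansion proof requested):

```python
import numpy as np

# Check the central moments of the sample-mean deviation d = m_hat - m:
# d ~ N(0, sigma^2/N), so E{d^3} = 0 and E{d^4} = 3*sigma^4/N^2 ~ 1/N^2.
rng = np.random.default_rng(1)
m, sigma, trials = 0.5, 1.0, 400000

for N in (25, 100):
    dev = rng.normal(m, sigma, size=(trials, N)).mean(axis=1) - m
    third = (dev ** 3).mean()
    fourth = (dev ** 4).mean()
    theory = 3 * sigma ** 4 / N ** 2
    print(f"N={N:4d}: E{{d^3}} = {third:+.2e}, "
          f"E{{d^4}} = {fourth:.2e} (theory {theory:.2e})")
```

Doubling N should reduce the fourth moment by roughly a factor of four, consistent with the 1/N² behavior the problem asks you to confirm.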
                       3.   Compute the bias and variance of ε̂ (not ε̂₁ and ε̂₂ separately) for
                            normal distributions.

                       4.   In order for a linear classifier h(X) = VᵀX + v₀ ≷ 0, which decides
                            between ω₁ and ω₂ by the sign of h(X), to be optimum by minimizing
                            the error of (5.37), prove that V and v₀ must satisfy