where $\Omega = [\omega_1 \; \cdots \; \omega_n]^T$ and $\omega_i$ is the $i$th frequency component.
(5) Linear transformations: Under any nonsingular linear transformation, the distance function of (2.22) keeps its quadratic form and also does not lose its positive definiteness. Therefore, after a nonsingular linear transformation, a normal distribution becomes another normal distribution with different parameters.

Also, it is always possible to find a nonsingular linear transformation which makes the new covariance matrix diagonal. Since a diagonal covariance matrix means uncorrelated variables (independent variables for a normal distribution), for a normal distribution we can always find a set of axes such that the random variables are independent in the new coordinate system. These subjects will be discussed in detail in a later section.
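As a minimal numerical sketch of this diagonalization (the 2-D parameters below are illustrative assumptions, not values from the text), the eigenvector matrix $\Phi$ of a covariance $\Sigma$ supplies one such nonsingular transformation: under $y = \Phi^T x$, the new covariance $\Phi^T \Sigma \Phi$ is diagonal, so the transformed variables are uncorrelated.

```python
import numpy as np

# Illustrative (assumed) parameters of a 2-D normal distribution
M = np.array([1.0, 2.0])          # expected vector
Sigma = np.array([[2.0, 1.2],     # covariance matrix (positive definite)
                  [1.2, 1.0]])

# Eigendecomposition Sigma = Phi diag(lam) Phi^T gives the transformation A = Phi^T
lam, Phi = np.linalg.eigh(Sigma)
A = Phi.T

# A nonsingular linear transformation y = A x maps N(M, Sigma) to N(A M, A Sigma A^T)
M_new = A @ M
Sigma_new = A @ Sigma @ A.T

print(np.round(Sigma_new, 12))    # diagonal: the off-diagonal terms vanish
print(lam)                        # diagonal entries are the eigenvalues of Sigma
```

Since $\Phi$ is orthonormal, the transformation is nonsingular, and the diagonal covariance confirms that the new variables are uncorrelated (hence, for a normal distribution, independent).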
(6) Physical justification: The assumption of normality is a reasonable approximation for many real data sets. This is particularly true for processes where random variables are sums of many variables and the central limit theorem can be applied. However, normality should not be assumed without good justification; more often than not, doing so leads to meaningless conclusions.
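A quick sketch of the central-limit effect (the uniform summands, $k = 50$, and the sample size are assumptions chosen only for illustration): standardized sums of many independent variables are statistically indistinguishable from a normal distribution even though each summand is far from normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Each observation is the sum of k independent uniform(0, 1) variables
k, n_samples = 50, 10_000
sums = rng.uniform(size=(n_samples, k)).sum(axis=1)

# Standardize using the exact mean k/2 and variance k/12 of the sum
z = (sums - k * 0.5) / np.sqrt(k / 12.0)

# Kolmogorov-Smirnov test against N(0, 1); a large p-value is consistent
# with the normal approximation given by the central limit theorem
print(stats.kstest(z, "norm"))
```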

                    2.2  Estimation of Parameters

                    Sample Estimates

Although the expected vector and autocorrelation matrix are important parameters for characterizing a distribution, they are unknown in practice and must be estimated from a set of available samples. This is normally done by using the sample estimation technique [6,7]. In this section, we will discuss the technique in a generalized form first, and later treat the estimates of the expected vector and autocorrelation matrix as special cases.
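Anticipating those two special cases, here is a minimal NumPy sketch (the data, $N$, and $n$ below are illustrative assumptions): given $N$ observed sample vectors, the expectations are replaced by sample averages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed data: N samples of an n-dimensional random vector (for illustration)
N, n = 1000, 3
X = rng.normal(size=(N, n))       # rows are the sample vectors X_1, ..., X_N

# Sample estimate of the expected vector:  M_hat = (1/N) sum_k X_k
M_hat = X.mean(axis=0)

# Sample estimate of the autocorrelation matrix:  S_hat = (1/N) sum_k X_k X_k^T
S_hat = (X.T @ X) / N

print(M_hat)
print(S_hat)
```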

Sample estimates: Let $y$ be a function of $x_1, \ldots, x_N$ as

$$y = f(x_1, \ldots, x_N) \tag{2.25}$$

with the expected value $m_y$ and variance $\sigma_y^2$: