
  estimator exploits prior knowledge about the parameter. In addition, the ulMMSE estimator is better suited to the evaluation criterion.
• Of the two nonlinear estimators, the MMSE estimator outperforms the MAP estimator. The obvious reason is that the cost function of the MMSE estimator matches the evaluation criterion.
• Of the two MMSE estimators, the nonlinear MMSE estimator outperforms the linear one. Both estimators have the same optimization criterion, but the constraints imposed on the ulMMSE estimator degrade its performance.



3.2.2  The error covariance of the unbiased linear MMSE estimator


We now return to the case of having linear sensors, z = Hx + v, as discussed in Section 3.1.5. The unbiased linear MMSE estimator appeared to be (see eq. (3.33)):

    \hat{x}_{\mathrm{ulMMSE}}(z) = \mu_x + K(z - H\mu_x)
    \qquad\text{with}\qquad
    K = C_x H^{T} \left( H C_x H^{T} + C_v \right)^{-1}

where C_v and C_x are the covariance matrices of v and x, and μ_x is the (prior) expectation vector of x. As said before, x̂_ulMMSE(·) is unbiased.
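
For concreteness, the estimate can be evaluated numerically. The following MATLAB sketch simply evaluates eq. (3.33); the values of H, Cx, Cv, mu_x and z are hypothetical and are not taken from the running example of this chapter.

    % Unbiased linear MMSE estimate, eq. (3.33), for assumed example values.
    H    = [1 1];                          % measurement matrix (hypothetical)
    Cx   = [4 0; 0 1];                     % prior covariance of x (assumed)
    Cv   = 0.5;                            % variance of the measurement noise v (assumed)
    mu_x = [2; 0];                         % prior expectation of x (assumed)
    z    = 3.2;                            % an example measurement

    K     = Cx*H' / (H*Cx*H' + Cv);        % gain: Cx*H'*(H*Cx*H' + Cv)^(-1)
    x_est = mu_x + K*(z - H*mu_x)          % ulMMSE estimate of x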
  Due to the unbiasedness of x̂_ulMMSE(·), the mean of the estimation error e = x̂_ulMMSE(z) − x is zero. The error covariance matrix C_e of e expresses the uncertainty that remains after having processed the measurements. Therefore, C_e is identical to the covariance matrix associated with the posterior probability density. It is given by (3.20):

    C_e = C_{x|z} = \left( C_x^{-1} + H^{T} C_v^{-1} H \right)^{-1}          (3.44)
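
Expression (3.44) can be checked numerically. The sketch below, again with hypothetical H, Cx and Cv, evaluates C_e via eq. (3.44) and compares it with the equivalent gain form C_e = (I − KH)C_x (a standard identity for this gain, used here only as a consistency check); the two results agree up to rounding errors.

    % Error covariance of the ulMMSE estimator, eq. (3.44),
    % for assumed (hypothetical) H, Cx and Cv.
    H  = [1 1];                            % measurement matrix
    Cx = [4 0; 0 1];                       % prior covariance of x
    Cv = 0.5;                              % variance of the measurement noise v

    Ce = inv(inv(Cx) + H'*inv(Cv)*H)       % eq. (3.44)

    % Consistency check: the same matrix follows from Ce = (I - K*H)*Cx,
    % with K the gain of eq. (3.33).
    K   = Cx*H' / (H*Cx*H' + Cv);
    Ce2 = (eye(2) - K*H)*Cx;
    norm(Ce - Ce2)                         % numerically zero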
The inverse of a covariance matrix is called an information matrix. For instance, C_e^{-1} is a measure of the information provided by the estimate x̂_ulMMSE(·). If the norm of C_e^{-1} is large, then the norm of C_e must be small, implying that the uncertainty in x̂_ulMMSE(·) is small as well. Equation (3.44) shows that C_e^{-1} is made up of two terms. The term C_x^{-1} represents the prior information provided by μ_x. The matrix C_v^{-1} represents the information that is given by z about the vector Hx. Therefore, the matrix H^T C_v^{-1} H represents the information about x provided by z. The two sources of information add up. So, the information about x provided by x̂_ulMMSE(·) is C_e^{-1} = C_x^{-1} + H^T C_v^{-1} H.
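
The additive structure of the information matrix can be made visible numerically. In the sketch below (with the same hypothetical H, Cx and Cv as above), halving the measurement noise variance increases the measurement information H^T C_v^{-1} H and, correspondingly, shrinks the error covariance C_e.

    % Prior information, measurement information and their sum,
    % for assumed (hypothetical) H, Cx and Cv.
    H  = [1 1];
    Cx = [4 0; 0 1];

    for Cv = [0.5 0.25]                    % reduce the noise variance
        I_prior = inv(Cx);                 % prior information, Cx^(-1)
        I_meas  = H'*inv(Cv)*H;            % measurement information, H'*Cv^(-1)*H
        Ce      = inv(I_prior + I_meas);   % eq. (3.44)
        fprintf('Cv = %.2f: norm(Ce^-1) = %.2f, norm(Ce) = %.3f\n', ...
                Cv, norm(I_prior + I_meas), norm(Ce));
    end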