
            Techniques for converting time-continuous models to time-discrete
            models are described in many textbooks, for instance, on control
            engineering.



            4.1.1  Models

            We assume that the continuous time t is equidistantly sampled with
            period Δ. The discrete time index is denoted by an integer variable i.
            Hence, the moments of sampling are t_i = iΔ. Furthermore, we assume
            that the estimation problem starts at t = 0. Thus, i is a non-negative
            integer denoting the discrete time.

            The state space model
            The state at time i is denoted by x(i) ∈ X where X is the state space.
            For discrete states, X = Ω = {ω_1, ..., ω_K} where ω_k is the k-th
            symbol (label, or class) out of K possible classes. For real-valued
            vectors with dimension M, we have X = R^M.
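              As a minimal illustration only (the variable names below are ours,
            not part of the text), the two kinds of state spaces might be repre-
            sented in MATLAB as follows: a discrete state is simply an index into
            a list of K class labels, and a real-valued state is a column vector
            in R^M.

              % Discrete state space: X = {omega_1, ..., omega_K} with, say, K = 3.
              % A state x(i) is represented by an index k into this label list.
              Omega = {'omega_1', 'omega_2', 'omega_3'};
              K = numel(Omega);
              x_discrete = 2;            % the current state is the class omega_2

              % Real-valued state space: X = R^M with, say, M = 4.
              M = 4;
              x_real = zeros(M, 1);      % a state vector x(i) in R^M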
              Suppose for a moment that we have observed the state of a process
            during its whole history, i.e. from the beginning of time up to the
            present. In other words, the sequence of states x(0), x(1), ..., x(i)
            is observed and as such fully known; i denotes the present time. In
            addition, suppose that, using this sequence, we want to estimate
            (predict) the next state x(i+1). Assuming that the states can be
            modelled as random variables, we need to evaluate the conditional
            probability density¹ p(x(i+1)|x(0), x(1), ..., x(i)). Once this
            probability density is known, the application of the theory in
            Chapters 2 and 3 will provide the optimal estimate of x(i+1). For
            instance, if X is a real-valued vector space, the Bayes estimator
            from Chapter 3 provides the best prediction of the next state (the
            density p(x(i+1)|x(0), x(1), ..., x(i)) must be used instead of the
            posterior density).
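              For the finite-state case, for example, the MAP rule of Chapter 2
            simply selects the class with the largest conditional probability.
            The sketch below assumes a purely illustrative probability vector
            for the K = 3 classes:

              % Hypothetical conditional probabilities
              % p( x(i+1) = omega_k | x(0), ..., x(i) ), k = 1, ..., K.
              % The numbers are illustrative only.
              p_next = [0.1; 0.7; 0.2];

              % MAP prediction of the next state: the most probable class.
              [p_max, xhat_next] = max(p_next);   % xhat_next = 2, i.e. omega_2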
              Unfortunately, the evaluation of p(x(i+1)|x(0), x(1), ..., x(i)) is
            a nasty task because it is not clear how to establish such a density
            in real-world problems. Things become much easier if we succeed in
            defining the state such that the so-called Markov condition applies:

                 p(x(i+1) | x(0), x(1), ..., x(i)) = p(x(i+1) | x(i))        (4.1)
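              Under the Markov condition, the finite-state case is fully
            described by a K×K matrix of transition probabilities
            Pt(k, l) = P(x(i+1) = ω_l | x(i) = ω_k). The following sketch
            simulates such a first-order Markov chain; the transition matrix and
            chain length are assumed for illustration, and MATLAB indexing starts
            at 1 whereas the time index i above starts at 0.

              % Transition matrix of a K = 3 state Markov chain (values assumed).
              % Pt(k,l) = P( x(i+1) = omega_l | x(i) = omega_k ); rows sum to one.
              Pt = [0.8 0.1 0.1; ...
                    0.2 0.6 0.2; ...
                    0.3 0.3 0.4];

              N = 50;                    % number of time steps to simulate
              x = zeros(1, N);
              x(1) = 1;                  % start in state omega_1
              for n = 1:N-1
                  % The next state depends on the current state only, not on the
                  % earlier history, which is exactly the Markov condition (4.1).
                  x(n+1) = find(rand < cumsum(Pt(x(n), :)), 1, 'first');
              end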




            ¹ For the finite-state case, the probability densities transform into probabilities,
            and appropriate summations replace the integrals.