
Since
\[
\sum_{j=-\infty}^{\infty} |\alpha_{j+1} - \alpha_j|\,|\alpha_{j+h+1} - \alpha_{j+h}| \le \sum_{j=-\infty}^{\infty} (\alpha_{j+1} - \alpha_j)^2 < \infty,
\]
we may write
\[
\begin{aligned}
R(h) &= \sum_{j=-\infty}^{\infty} \alpha_{j+1}\alpha_{j+h+1} - \sum_{j=-\infty}^{\infty} \alpha_j \alpha_{j+h+1} - \sum_{j=-\infty}^{\infty} \alpha_{j+1}\alpha_{j+h} + \sum_{j=-\infty}^{\infty} \alpha_j \alpha_{j+h} \\
&= 2\sum_{j=-\infty}^{\infty} \alpha_j \alpha_{j+h} - \sum_{j=-\infty}^{\infty} \alpha_j \alpha_{j+h+1} - \sum_{j=-\infty}^{\infty} \alpha_{j+1}\alpha_{j+h} \\
&= 2R_Z(|h|) - R_Z(|h+1|) - R_Z(|h-1|),
\end{aligned}
\]
where $R_Z$ denotes the autocovariance function of the process $\{Z_t : t \in \mathbb{Z}\}$. This is in agreement with Example 6.4.
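As a numerical check of this identity, the following sketch compares $R(h)$, computed directly from the differenced coefficients $\alpha_{j+1} - \alpha_j$ (the coefficients of the process whose autocovariance is $R$, as the first display above indicates), with $2R_Z(|h|) - R_Z(|h+1|) - R_Z(|h-1|)$. The finite coefficient vector is a hypothetical choice; the text allows any square-summable doubly infinite sequence.

```python
import numpy as np

# Illustrative finite coefficient sequence (hypothetical values);
# the text allows any doubly infinite, square-summable (alpha_j).
alpha = np.array([0.9, 0.5, 0.2, 0.1])

def acov(a, h):
    """Autocovariance sum_j a_j a_{j+h} of a moving average with
    coefficients a and unit-variance noise; zero beyond the support."""
    h = abs(h)
    if h >= len(a):
        return 0.0
    return float(a[: len(a) - h] @ a[h:])

# Coefficients of the differenced process are alpha_{j+1} - alpha_j;
# zero-padding on both ends captures the boundary terms.
diff_coef = np.diff(np.concatenate(([0.0], alpha, [0.0])))

for h in range(-3, 4):
    lhs = acov(diff_coef, h)                                  # R(h) directly
    rhs = 2 * acov(alpha, h) - acov(alpha, h + 1) - acov(alpha, h - 1)
    print(f"h={h:+d}  direct={lhs:+.6f}  identity={rhs:+.6f}")
```

The two columns agree for every lag, including lags beyond the support of the coefficients, where both sides vanish.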



                                                     6.4 Markov Processes
If $X_0, X_1, \ldots$ are independent, real-valued random variables, then
\[
\Pr(X_{n+1} \le x \mid X_0, \ldots, X_n) = \Pr(X_{n+1} \le x)
\]
with probability 1. If $X_0, X_1, \ldots$ is a Markov process, this property is generalized to allow $\Pr(X_{n+1} \le x \mid X_0, \ldots, X_n)$ to depend on $X_n$:
\[
\Pr(X_{n+1} \le x \mid X_0, \ldots, X_n) = \Pr(X_{n+1} \le x \mid X_n).
\]
That is, the conditional distribution of $X_{n+1}$ given $X_0, \ldots, X_n$ does not depend on $X_0, \ldots, X_{n-1}$.
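A brief simulation sketch of this property, using a Gaussian random walk, a standard Markov process (not one of the text's numbered examples): once we condition on $X_2$, conditioning further on an earlier value should leave the distribution of $X_3$ essentially unchanged. The bin boundaries, cutoff, and sample size are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk X_{n+1} = X_n + Z_{n+1} with standard normal increments.
n_paths, n_steps = 200_000, 3
Z = rng.normal(size=(n_paths, n_steps))
X = np.cumsum(Z, axis=1)            # columns are X_1, X_2, X_3 (X_0 = 0)
x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]

# Condition on X_2 landing in a fixed bin, then split further by the
# sign of X_1; under the Markov property the conditional distribution
# of X_3 should not change with X_1.
in_bin = (x2 > 0.4) & (x2 < 0.6)
for label, extra in [("X_1 < 0", x1 < 0), ("X_1 > 0", x1 > 0)]:
    sel = in_bin & extra
    p = np.mean(x3[sel] <= 1.0)
    print(f"Pr(X_3 <= 1 | X_2 in (0.4, 0.6), {label}) = {p:.3f}  (n={sel.sum()})")
```

Both estimates should be close to each other (and to $\Phi(0.5) \approx 0.691$, since $X_3 \mid X_2 = 0.5$ is normal with mean $0.5$ and variance 1).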


Example 6.8 (First-order autoregressive process). Consider the first-order autoregressive process introduced in Example 6.2:
\[
X_t =
\begin{cases}
\dfrac{1}{\sqrt{1-\rho^2}}\, Z_0 & \text{if } t = 0 \\[1ex]
\rho X_{t-1} + Z_t & \text{if } t = 1, 2, \ldots
\end{cases}
\]
where $Z_0, Z_1, Z_2, \ldots$ is a sequence of independent, identically distributed random variables, each with a normal distribution with mean 0 and variance $\sigma^2$, and $-1 < \rho < 1$.
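A minimal simulation sketch of this process, using the stationary initialization $X_0 = Z_0/\sqrt{1-\rho^2}$ given above; the values of $\rho$, $\sigma$, the horizon, and the number of sample paths are hypothetical choices. With this initialization, $\mathrm{Var}(X_t) = \sigma^2/(1-\rho^2)$ for every $t$, which the sample variances should reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

# First-order autoregressive process with stationary initialization.
rho, sigma = 0.7, 1.0               # hypothetical parameter values
n_paths, T = 100_000, 6

Z = sigma * rng.normal(size=(n_paths, T + 1))
X = np.empty((n_paths, T + 1))
X[:, 0] = Z[:, 0] / np.sqrt(1 - rho**2)   # X_0 = Z_0 / sqrt(1 - rho^2)
for t in range(1, T + 1):
    X[:, t] = rho * X[:, t - 1] + Z[:, t]  # X_t = rho X_{t-1} + Z_t

# The variance should be sigma^2 / (1 - rho^2) at every time point.
print("target variance:  ", sigma**2 / (1 - rho**2))
print("sample variances: ", np.var(X, axis=0).round(3))
```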
Note that we may write
\[
X_t = \frac{\rho^t}{\sqrt{1-\rho^2}}\, Z_0 + \rho^{t-1} Z_1 + \cdots + \rho Z_{t-1} + Z_t;
\]
thus, for each $t = 0, 1, 2, \ldots$, $Z_{t+1}$ and $(X_0, \ldots, X_t)$ are independent. Using the fact that
\[
X_{t+1} = \rho X_t + Z_{t+1}, \quad t = 0, 1, 2, \ldots,
\]

it follows that, for each $s \in \mathbb{R}$,
\[
\begin{aligned}
E\{\exp(isX_{t+1}) \mid X_0, \ldots, X_t\} &= E\{\exp(is\rho X_t)\exp(isZ_{t+1}) \mid X_0, \ldots, X_t\} \\
&= \exp(is\rho X_t)\, E\{\exp(isZ_{t+1})\}.
\end{aligned}
\]
Since this conditional characteristic function depends on $X_0, \ldots, X_t$ only through $X_t$, the first-order autoregressive process is a Markov process.
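As an empirical companion to this argument: since the conditional distribution of $X_{t+1}$ given $X_0, \ldots, X_t$ is normal with mean $\rho X_t$ and variance $\sigma^2$, a least-squares fit of $X_{t+1}$ on $(X_t, X_{t-1})$ should recover a coefficient near $\rho$ on $X_t$ and near 0 on $X_{t-1}$. A sketch, with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)

rho, sigma, n = 0.7, 1.0, 500_000   # hypothetical choices

# Simulate one long stationary path of the AR(1) process.
Z = sigma * rng.normal(size=n)
X = np.empty(n)
X[0] = Z[0] / np.sqrt(1 - rho**2)   # stationary start
for t in range(1, n):
    X[t] = rho * X[t - 1] + Z[t]

# Regress X_{t+1} on (X_t, X_{t-1}); under the Markov property the
# lag-2 value carries no additional predictive information.
A = np.column_stack([X[1:-1], X[:-2]])
b = X[2:]
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
print("coefficient on X_t:    ", round(coef[0], 4), "(near rho = 0.7)")
print("coefficient on X_{t-1}:", round(coef[1], 4), "(near 0)")
```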