Page 285 - Probability and Statistical Inference

262    5. Concepts of Stochastic Convergence

definition of g′(θ) itself. Recall that for some fixed x, one has:

   lim_{h→0} h⁻¹{g(x + h) − g(x)} = g′(x).

We again apply Slutsky’s Theorem, part (ii), in conjunction with (5.3.6) to complete the proof. ■
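Since the statement of the Mann-Wald Theorem lies on the preceding page, it may help to restate the conclusion just proved. This is the standard delta-method formulation; the symbols Tₙ, θ, σ² are the usual notation for this result:

```latex
% Mann-Wald Theorem (delta method), in a standard formulation; provided
% g is differentiable at \theta with g'(\theta) \neq 0:
\sqrt{n}\,(T_n - \theta) \xrightarrow{d} N(0, \sigma^2)
\;\Longrightarrow\;
\sqrt{n}\,\{g(T_n) - g(\theta)\} \xrightarrow{d}
N\!\left(0, \{g'(\theta)\}^2 \sigma^2\right)
\quad \text{as } n \to \infty.
```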
   Remark 5.3.1 Suppose that the X’s are as in the CLT. Then we immediately conclude that

   √n(X̄ₙ² − µ²) →ᵈ N(0, 4µ²σ²) as n → ∞,

once we plug in g(x) = x², x ∈ ℜ, and apply the Mann-Wald Theorem. We had shown this result in a different way in the Example 5.3.7. Now, we can also claim that

   √n(X̄ₙ^q − µ^q) →ᵈ N(0, q²µ^(2q−2)σ²) as n → ∞,

by plugging in g(x) = x^q, x ∈ ℜ, for any fixed non-zero real number q (≠ 1). We leave verification of this result as an exercise. Of course, we tacitly assume that q is chosen in such a way that both X̄ₙ^q and µ^q remain well-defined. Note that for negative values of q, we need to assume that the probability of X̄ₙ being exactly zero is in fact zero and that µ is also non-zero.
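The claimed limit for g(x) = x^q can be probed by simulation. Below is a minimal sketch (not from the text) using iid normal observations with illustrative values µ = 2, σ = 1, q = 3, for which the asymptotic variance q²µ^(2q−2)σ² equals 144; the empirical variance of √n(X̄ₙ^q − µ^q) across replications should land near that value.

```python
import random
import statistics

# Monte Carlo sketch of the Mann-Wald (delta method) limit in Remark 5.3.1:
#   sqrt(n) * (Xbar_n^q - mu^q) -->d N(0, q^2 * mu^(2q-2) * sigma^2).
# Illustrative choices (not from the text): iid N(mu, sigma^2) observations.
random.seed(1)
mu, sigma, q, n, reps = 2.0, 1.0, 3, 400, 4000

z = []
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    z.append(n ** 0.5 * (xbar ** q - mu ** q))

asym_var = q ** 2 * mu ** (2 * q - 2) * sigma ** 2   # = 144 for these values
emp_var = statistics.variance(z)
print(asym_var, round(emp_var, 1))
```

Increasing n shrinks the higher-order remainder terms in the expansion of g(X̄ₙ) about µ, so the empirical variance tightens around q²µ^(2q−2)σ².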
   Example 5.3.8 Let X₁, ..., Xₙ be iid Poisson(λ) with λ > 0. Observe that in this case µ = σ² = λ and then, by the CLT, √n(X̄ₙ − λ)/√λ →ᵈ N(0, 1) as n → ∞. Then, by using the Remark 5.3.1 and Mann-Wald Theorem, we immediately see, for example, that √n(X̄ₙ² − λ²) →ᵈ N(0, 4λ³) as n → ∞. ■
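A simulation sketch of this example (with an illustrative value λ = 4, not from the text): the limiting variance is 4λ³ = 256, which the empirical variance of √n(X̄ₙ² − λ²) should approximate. The standard library has no Poisson sampler, so a simple Knuth-style sampler is included; it is adequate for small λ.

```python
import math
import random
import statistics

def rpois(lam):
    # Knuth's multiplicative method for Poisson(lam); fine for small lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

# Check Example 5.3.8: sqrt(n) * (Xbar_n^2 - lam^2) -->d N(0, 4 * lam^3).
random.seed(2)
lam, n, reps = 4.0, 300, 3000

z = []
for _ in range(reps):
    xbar = sum(rpois(lam) for _ in range(n)) / n
    z.append(n ** 0.5 * (xbar ** 2 - lam ** 2))

print(4 * lam ** 3, round(statistics.variance(z), 1))  # limit variance vs empirical
```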
                                    Now we move towards an appropriate CLT for the standardized sample
                                 variance. This result again utilizes a nice blend of the CLT (Theorem 5.3.4)
                                 and Slutsky’s Theorem.
   Theorem 5.3.6 (Central Limit Theorem for the Sample Variance) Let X₁, ..., Xₙ be iid random variables with mean µ, variance σ² (> 0), µ₄ = E{(X₁ − µ)⁴}, and we assume that 0 < µ₄ < ∞ as well as µ₄ > σ⁴. Denote the sample variance Sₙ² = (n − 1)⁻¹ Σᵢ₌₁ⁿ (Xᵢ − X̄ₙ)² for n ≥ 2. Then, we have:

   √n(Sₙ² − σ²)/√(µ₄ − σ⁴) →ᵈ N(0, 1) as n → ∞.
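Before the proof, the theorem can be illustrated numerically (a sketch under assumed values, not part of the text): for iid standard exponential data, µ = σ² = 1 and µ₄ = 9, so µ₄ − σ⁴ = 8 and √n(Sₙ² − 1)/√8 should behave approximately like a standard normal variable for large n.

```python
import random
import statistics

# Simulation sketch of Theorem 5.3.6 with illustrative iid Exp(1) data:
# mu = sigma^2 = 1, mu4 = E{(X - 1)^4} = 9, hence mu4 - sigma^4 = 8 and
#   Z_n = sqrt(n) * (S_n^2 - 1) / sqrt(8)  should be approximately N(0, 1).
random.seed(3)
n, reps = 200, 3000

z = []
for _ in range(reps):
    x = [random.expovariate(1.0) for _ in range(n)]
    xbar = sum(x) / n
    s2 = sum((v - xbar) ** 2 for v in x) / (n - 1)  # divisor n - 1, as in the theorem
    z.append(n ** 0.5 * (s2 - 1.0) / 8.0 ** 0.5)

print(round(statistics.mean(z), 3), round(statistics.variance(z), 3))
```

The empirical mean and variance of Zₙ should be close to 0 and 1 respectively, even though Sₙ² itself is noticeably skewed for exponential data at moderate n.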
   Proof Let us first work with n⁻¹ Σᵢ₌₁ⁿ (Xᵢ − X̄ₙ)² = (n − 1)Sₙ²/n. We denote Yᵢ = (Xᵢ − µ)², i = 1, ..., n, Ȳₙ = n⁻¹ Σᵢ₌₁ⁿ Yᵢ, and write

   n⁻¹ Σᵢ₌₁ⁿ (Xᵢ − X̄ₙ)² = Ȳₙ − (X̄ₙ − µ)².    (5.3.7)

From (5.3.7), observe that