Page 562 - Probability and Statistical Inference

12   Large-Sample Inference



                           12.1 Introduction

In the previous chapters we presented different approaches to statistical inference. Those methods were meant to deliver predominantly exact answers, whatever the sample size n, large or small. Now we summarize approximate confidence interval and test procedures which are meant to work when the sample size n is large. We emphasize that these methods allow us to construct confidence intervals with approximate confidence coefficient 1 − α or to construct tests with approximate level α.
Section 12.2 gives some useful large-sample properties of a maximum likelihood estimator (MLE). In Section 12.3, we introduce large-sample confidence interval and test procedures for (i) the mean µ of a population having an unknown distribution, (ii) the success probability p in the Bernoulli distribution, and (iii) the mean λ of a Poisson distribution. The variance stabilizing transformation is introduced in Section 12.4, where we first exhibit the two customary transformations sin⁻¹(√p) and √λ used respectively in the case of a Bernoulli(p) and a Poisson(λ) population. Section 12.4.3 includes Fisher's tanh⁻¹(ρ) transformation in the context of the correlation coefficient ρ in a bivariate normal population.
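The point of a variance stabilizing transformation can be previewed numerically. For a Bernoulli(p) sample proportion p̂, the customary arcsine transformation sin⁻¹(√p̂) has variance approximately 1/(4n) regardless of p, whereas Var(p̂) = p(1 − p)/n depends on the unknown p. A Monte Carlo sketch in Python (the sample size, success probabilities, and replication count are assumptions chosen for the demonstration):

```python
import math
import random

random.seed(7)

def arcsine_var(p, n, reps=5000):
    """Monte Carlo estimate of Var(sin^{-1}(sqrt(phat))) for Bernoulli(p) samples of size n."""
    vals = []
    for _ in range(reps):
        phat = sum(1 for _ in range(n) if random.random() < p) / n
        vals.append(math.asin(math.sqrt(phat)))
    m = sum(vals) / reps
    return sum((v - m) ** 2 for v in vals) / reps

n = 100
v1 = arcsine_var(0.2, n)   # variance of transformed proportion when p = 0.2
v2 = arcsine_var(0.5, n)   # ... and when p = 0.5
print(v1, v2, "theory:", 1 / (4 * n))
```

Both estimated variances come out close to 1/(4n) = 0.0025 even though p differs, which is exactly what "variance stabilizing" means.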


12.2    Maximum Likelihood Estimation
In this section, we provide a brief introductory discussion of some of the useful large-sample properties of the MLE. Consider random variables X₁, ..., Xₙ which are iid with a common pmf or pdf f(x; θ) where x ∈ χ ⊆ ℜ and θ ∈ Θ ⊆ ℜ. Having observed the data X = x, recall that the likelihood function is given by

   L(θ) = ∏ⁿᵢ₌₁ f(xᵢ; θ).

We denote it as a function of θ alone because the observed data x = (x₁, ..., xₙ) is held fixed.
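As a quick numerical check of this definition (a Python sketch with hypothetical Poisson count data, not an example from the text), one can maximize the logarithm of this product over a grid of θ values and verify that, for a Poisson(λ) model, the maximizer agrees with the sample mean, which is the known closed-form MLE:

```python
import math

# Hypothetical iid Poisson counts; here theta = lam and
# f(x; lam) = e^{-lam} lam^x / x!.
data = [3, 1, 4, 2, 2, 5, 0, 3]

def log_likelihood(lam, xs):
    """log L(lam) = sum_i log f(x_i; lam), the log of the product defining L."""
    return sum(-lam + x * math.log(lam) - math.lgamma(x + 1) for x in xs)

# Crude grid search over lam in (0, 10]; the data are held fixed and
# the likelihood is viewed as a function of lam alone.
grid = [k / 100 for k in range(1, 1001)]
mle = max(grid, key=lambda lam: log_likelihood(lam, data))
print("grid MLE:", mle, " sample mean:", sum(data) / len(data))
```

Both numbers come out to 2.5 here, consistent with the fact that the sample mean maximizes the Poisson likelihood.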
We make some standing assumptions. These requirements are, in spirit, similar to those used in the derivation of the Cramér–Rao inequality.