
components. This is the central idea which eventually leads to the Analysis of
Variance techniques, used so widely in statistics.

    The Exercise 4.6.7 gives another transformation which proves that
$\bar{X}$ and $S^2$ are independent when the $X_i$'s are iid $N(\mu, \sigma^2)$.
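As a quick numerical illustration (not part of the text), this independence can be probed by simulation. The sketch below assumes NumPy; the sample size, parameters, and seed are arbitrary choices. It checks only the weaker symptom that the empirical correlation between $\bar{X}$ and $S^2$ across many normal samples is essentially zero.

import numpy as np

# Monte Carlo sketch: for iid N(mu, sigma^2) data the sample mean and the
# sample variance are independent, so their empirical correlation across
# replicates should be essentially zero (zero correlation is only a
# necessary symptom of independence, not a proof).
rng = np.random.default_rng(seed=1)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)              # sample mean of each replicate
s2 = x.var(axis=1, ddof=1)         # sample variance S^2 with divisor n - 1

print("corr(xbar, S^2), normal data:", np.corrcoef(xbar, s2)[0, 1])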
    Remark 4.4.2 Let us go back one more time to the setup considered in
the Example 4.4.9 and Theorem 4.4.2. Another interesting feature can be
noticed among the Helmert variables $Y_2, ..., Y_n$. Only the Helmert variable $Y_n$
functionally depends on the last observed variable $X_n$. This particular feature has
an important implication. Suppose that we have an additional observation $X_{n+1}$ at
our disposal beyond $X_1, ..., X_n$. Then, with $\bar{X}_{n+1} = (n+1)^{-1}\sum_{i=1}^{n+1} X_i$, the
new sample variance $S_{n+1}^2$ and its decomposition would be expressed as
$$nS_{n+1}^2 = \sum_{i=1}^{n+1}\left(X_i - \bar{X}_{n+1}\right)^2 = \sum_{i=2}^{n+1} Y_i^2,$$
where $Y_{n+1} = \{n(n+1)\}^{-1/2}\left(X_1 + \cdots + X_n - nX_{n+1}\right)$. Here, note that $Y_2, ..., Y_n$
based on $X_1, ..., X_n$ alone remain exactly the same as in (4.4.6). In other words,
the Helmert transformation shows exactly how the sample variance is af-
fected in a sequential setup when we let $n$ increase successively.
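This sequential update can be checked numerically. The sketch below is not part of the text; it assumes NumPy and the Helmert convention $Y_{n+1} = \{n(n+1)\}^{-1/2}(X_1 + \cdots + X_n - nX_{n+1})$ used above, with arbitrary simulated data, and it verifies the identity $nS_{n+1}^2 = (n-1)S_n^2 + Y_{n+1}^2$.

import numpy as np

# Numerical check of the sequential decomposition suggested by the Helmert
# variables: with Y_{n+1} = (X_1 + ... + X_n - n*X_{n+1}) / sqrt(n(n+1)),
# one has  n * S^2_{n+1} = (n-1) * S^2_n + Y_{n+1}^2 , so only the single
# new Helmert variable is needed to update the sample variance.
rng = np.random.default_rng(seed=2)
n = 8
x = rng.normal(0.0, 1.0, size=n + 1)   # X_1, ..., X_{n+1}

s2_n = np.var(x[:n], ddof=1)           # S^2 based on the first n observations
s2_np1 = np.var(x, ddof=1)             # S^2 based on all n + 1 observations
y_np1 = (x[:n].sum() - n * x[n]) / np.sqrt(n * (n + 1))

print(n * s2_np1)                      # left-hand side
print((n - 1) * s2_n + y_np1**2)       # right-hand side: the two agree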
    Remark 4.4.3 By extending the two-dimensional polar transformation
mentioned in (4.1.2) to n dimensions, one can supply an alternative proof
of the Theorem 4.4.2. Indeed, in Fisher's writings, one often finds derivations
using the n-dimensional polar transformations and the associated geometry.
We may also add that we could have used one among many choices of or-
thogonal matrices instead of the specific Helmert matrix A given by (4.4.7) in
proving the Theorem 4.4.2. If we did that, then we would be hard pressed to
claim useful interpretations like the ones given in our Remarks 4.4.1-4.4.2
which guide the readers in the understanding of some of the deep-rooted
ideas in statistics.
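To make the non-uniqueness of the orthogonal matrix concrete, here is a small numerical sketch (not from the text; it assumes NumPy, and the QR-based completion is only one convenient device): it extends the first row $(1/\sqrt{n}, ..., 1/\sqrt{n})$ to an orthogonal matrix and verifies that $Y_1 = \sqrt{n}\,\bar{X}$ and $Y_2^2 + \cdots + Y_n^2 = (n-1)S^2$ still hold.

import numpy as np

# The proof of Theorem 4.4.2 only needs *some* orthogonal matrix A whose
# first row is (1/sqrt(n), ..., 1/sqrt(n)); the Helmert matrix is one
# concrete choice. Here we complete that row to an orthogonal matrix via a
# QR factorization and check the two key consequences numerically.
rng = np.random.default_rng(seed=3)
n = 6
m = np.column_stack([np.full(n, 1.0 / np.sqrt(n)), rng.normal(size=(n, n - 1))])
q, _ = np.linalg.qr(m)                 # orthonormal columns; first spans (1, ..., 1)
a = q.T
if a[0, 0] < 0:                        # fix the sign so the first row is +1/sqrt(n)
    a[0] = -a[0]

x = rng.normal(2.0, 1.5, size=n)
y = a @ x
print(y[0], np.sqrt(n) * x.mean())                      # these agree
print(np.sum(y[1:] ** 2), (n - 1) * np.var(x, ddof=1))  # these agree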
    Suppose that $X_1, ..., X_n$ are iid random variables with $n \geq 2$. Then,
$$\bar{X} \text{ and } S^2 \text{ are independent} \;\Rightarrow\; \text{the } X_i\text{'s are normally distributed.}$$
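A simulation cannot prove this characterization, but it can illustrate the contrast with the normal case: for a skewed non-normal parent distribution, $\bar{X}$ and $S^2$ are visibly correlated. The sketch below is illustrative only and is not part of the text; it assumes NumPy, and the exponential parent, sample size, and seed are arbitrary choices.

import numpy as np

# For a skewed non-normal parent (here exponential), the sample mean and the
# sample variance are *not* independent; their empirical correlation is
# clearly positive, unlike in the normal case.
rng = np.random.default_rng(seed=4)
n, reps = 10, 100_000

x = rng.exponential(scale=1.0, size=(reps, n))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)

print("corr(xbar, S^2), exponential data:", np.corrcoef(xbar, s2)[0, 1])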
    Remark 4.4.4 Suppose that from the iid random samples $X_1, ..., X_n$, one
obtains the sample mean $\bar{X}$ and the sample variance $S^2$. If it is then assumed
that $\bar{X}$ and $S^2$ are independently distributed, then effectively one is not assum-
ing any less than normality of the original iid random variables. In other words,
the independence of $\bar{X}$ and $S^2$ is a characteristic property of the normal
distribution alone. This is a deep result in probability theory. For a
proof of this fundamental characterization of a normal distribution