
80    2. Expectations of Functions of Random Variables

                                          A finite mgf M_X(t) determines a unique infinite sequence
                                                  of moments of a random variable X.

                                    Remark 2.3.1 Incidentally, the sequence of moments {η_r = E(X^r) : r = 1,
                                 2, ...} is sometimes referred to as the sequence of positive integral moments
                                 of X. Theorem 2.3.1 provides an explicit tool to recover all the positive
                                 integral moments of X by successively differentiating its mgf and then letting
                                 t = 0 in the resulting expression. It is interesting to note that the negative
                                 integral moments of X, that is, when r = –1, –2, ... in Definition 2.3.1, are
                                 also hidden inside the same mgf. These negative moments of X can be recovered
                                 by a process of successive integration of the mgf, an operation viewed as the
                                 opposite of differentiation. Precise statements of the regularity conditions
                                 under which the negative moments of X can be derived with this approach are
                                 found in Cressie et al. (1981).
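As a sketch of the integration route just described (stated here informally; the precise regularity conditions are those of Cressie et al. (1981)), one such identity for a positive random variable X reads:

```latex
% Sketch, for a positive random variable X and r = 1, 2, ...;
% regularity conditions as in Cressie et al. (1981).
% Since \int_0^\infty t^{r-1} e^{-tx}\, dt = \Gamma(r)/x^{r} for x > 0,
% interchanging expectation and integration gives
E\!\left(X^{-r}\right) \;=\; \frac{1}{\Gamma(r)} \int_0^{\infty} t^{\,r-1}\, M_X(-t)\, dt .
```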


                                           The positive integral moments of X can be found by
                                           successively differentiating its mgf with respect to t
                                                 and letting t = 0 in the final derivative.
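The boxed statement above can be illustrated numerically; the following sketch (not from the text) differentiates the mgf of an Exponential distribution with rate lam = 2, namely M_X(t) = lam/(lam – t), by finite differences at t = 0 and recovers the first two positive integral moments:

```python
# Illustrative sketch: recover E(X) and E(X^2) for X ~ Exponential(lam = 2)
# by numerically differentiating its mgf M_X(t) = lam / (lam - t) at t = 0.

lam = 2.0

def mgf(t):
    # mgf of Exponential(lam), valid for t < lam
    return lam / (lam - t)

def derivative(f, t, h=1e-5, order=1):
    # central finite differences for the first and second derivative
    if order == 1:
        return (f(t + h) - f(t - h)) / (2 * h)
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

eta1 = derivative(mgf, 0.0, order=1)   # approximates E(X)   = 1/lam   = 0.5
eta2 = derivative(mgf, 0.0, order=2)   # approximates E(X^2) = 2/lam^2 = 0.5

print(round(eta1, 4), round(eta2, 4))
```

The same two-step recipe (differentiate r times, then set t = 0) applies to any mgf that is finite in a neighborhood of the origin.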


                                 The following result is very useful and simple. We leave its proof as an exer-
                                 cise. There will be ample opportunities to use this theorem in the sequel.
                                    Theorem 2.3.2 Suppose that a random variable X has the mgf M_X(t), for
                                 | t | < a with some a > 0. Let Y = cX + d be another random variable where
                                 c, d are fixed real numbers. Then, the mgf M_Y(t) of Y is related to the mgf
                                 M_X(t) as follows:

                                                       M_Y(t) = e^{td} M_X(ct).

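A quick numerical spot-check of this relation (illustrative only; it does not replace the proof left as an exercise) can be run with X ~ Bernoulli(p), whose mgf is M_X(t) = 1 – p + p e^t, and Y = cX + d:

```python
import math

# Spot-check of M_Y(t) = e^{td} M_X(ct) for Y = cX + d, using
# X ~ Bernoulli(p) with mgf M_X(t) = 1 - p + p e^t.

p, c, d = 0.3, 2.0, -1.0

def mgf_X(t):
    return 1 - p + p * math.exp(t)

def mgf_Y_direct(t):
    # E(e^{tY}) computed directly over the two support points of Y = cX + d
    return (1 - p) * math.exp(t * d) + p * math.exp(t * (c + d))

for t in (-1.0, 0.0, 0.5, 1.5):
    lhs = mgf_Y_direct(t)
    rhs = math.exp(t * d) * mgf_X(c * t)
    assert abs(lhs - rhs) < 1e-12
print("relation verified at the test points")
```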
                                    In the following subsections, we show the derivations of the mgf in the
                                 case of a few specific distributions. We also exhibit some immediate applica-
                                 tions of Theorem 2.3.1.


                                 2.3.1   The Binomial Distribution
                                 Suppose that X has the Binomial (n, p) distribution with its pmf f(x) =
                                 \binom{n}{x} p^x (1 – p)^{n–x} for x = 0, 1, ..., n and 0 < p < 1, given by
                                 (1.7.2). Here, for all fixed t ∈ ℜ we can express M_X(t) as