Page 111 - Probability and Statistical Inference

88    2. Expectations of Functions of Random Variables

                                 claim that X and Y have different probability distributions because their two
                                 pdf's are obviously different when c ≠ 0. In Figure 2.4.1, these two pdf's
                                 have been plotted when c = 1/2. Here all the moments of X are finite, but in
                                 Exercise 2.4.5 one would verify that the mgf of the random variable X is not
                                 finite. In the related Exercise 2.4.6, examples of other pairs of random
                                 variables with analogous characteristics can be found. !

                                          Finite moments alone may not determine a distribution
                                            uniquely. Example 2.4.4 highlights this point.

                                    Depending on the particular situation, however, an infinite sequence of all
                                 finite moments may or may not characterize a probability distribution uniquely.
                                 Any elaborate discussion of such issues would be beyond the scope of this book.
                                 The reader may additionally consult Section 3 of Chapter 7 in Feller (1971) on
                                 "Moment Problems." Chung (1974) also includes relevant details.


                                 2.5 The Probability Generating Function

                                 We have seen that a mgf generates the moments. Analogously, there is another
                                 function which generates the probabilities. Suppose that X is a non-negative
                                 random variable. A probability generating function (pgf) of X is defined by

                                            P_X(t) = E[t^X].

                                 The explicit form of a pgf is often found when the random variable X is
                                 integer-valued. From this point onward, let us include only non-negative
                                 integer-valued random variables in this discussion.
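
The defining expectation E[t^X] can be checked numerically. The sketch below (in Python, purely for illustration; the parameter values are arbitrary choices, not from the text) truncates the defining sum for a Geometric(p) variable on {1, 2, ...} and compares it with that distribution's well-known closed-form pgf, pt/(1 - (1 - p)t):

```python
# Sketch: numerically verifying P_X(t) = E[t^X] for a Geometric(p)
# random variable on {1, 2, ...}, with P(X = x) = (1 - p)^(x - 1) * p.
# The closed-form pgf p*t / (1 - (1-p)*t) is valid for |t| < 1/(1 - p).
p, t = 0.4, 0.5  # arbitrary illustrative values

# Truncated expectation: sum over x = 1, ..., 199 of t^x * P(X = x).
# The tail decays geometrically, so 200 terms are far more than enough.
expectation = sum(t**x * (1 - p)**(x - 1) * p for x in range(1, 200))

closed_form = p * t / (1 - (1 - p) * t)
assert abs(expectation - closed_form) < 1e-12
```

The agreement between the truncated sum and the closed form illustrates that the pgf is nothing more than the expectation of t^X.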
                                    Why this function P_X(t) is called a pgf should become clear once we
                                 write it out fully as follows:

                                    P_X(t) = E[t^X] = \sum_{i=0}^{\infty} p_i t^i = p_0 + p_1 t + p_2 t^2 + ...,

                                 where p_i = P(X = i) for i = 0, 1, 2, ... . We see immediately that the
                                 coefficient of t^i is p_i, the probability that X = i, for i = 0, 1, 2, ... .
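
As a concrete illustration of how a pgf "generates" the probabilities, the following sketch (in Python; the Binomial(n, p) example and its parameter values are our own choices for illustration) expands the standard binomial pgf (q + pt)^n as a polynomial in t and checks that the coefficient of t^i equals P(X = i):

```python
# Sketch: the pgf of X ~ Binomial(n, p) is P_X(t) = (q + p*t)^n with
# q = 1 - p. We expand it as a polynomial in t by repeated convolution
# with the linear factor (q + p*t), then compare each coefficient with
# the binomial pmf P(X = i) = C(n, i) p^i q^(n - i).
from math import comb

n, p = 5, 0.3  # arbitrary illustrative values
q = 1 - p

coeffs = [1.0]  # polynomial "1" to start
for _ in range(n):
    new = [0.0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        new[i] += c * q      # multiply by the constant term q
        new[i + 1] += c * p  # multiply by the t-term p
    coeffs = new

# The coefficient of t^i in the expanded pgf is exactly P(X = i).
for i, c in enumerate(coeffs):
    pmf = comb(n, i) * p**i * q**(n - i)
    assert abs(c - pmf) < 1e-12
```

Note also that the coefficients sum to P_X(1) = 1, as any pmf must.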




                                 In light of Theorem 2.3.1, one can justify the following result: