
  This result leads immediately to an important generalization. Consider a
function of X and Y in the form g(X)h(Y) for which an expectation exists.
Then, if X and Y are independent,

    E\{g(X)h(Y)\} = E\{g(X)\}\,E\{h(Y)\}.                                  (4.28)
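  Equation (4.28) is easy to check numerically. The following sketch (the
distributions and the functions g and h are arbitrary illustrative choices, not
taken from the text) estimates both sides by Monte Carlo for an independent pair:

    import numpy as np

    # Monte Carlo check of E{g(X)h(Y)} = E{g(X)} E{h(Y)} for independent X and Y.
    # The distributions and the functions g and h below are arbitrary choices.
    rng = np.random.default_rng(0)
    n = 1_000_000
    x = rng.normal(loc=1.0, scale=2.0, size=n)    # X ~ N(1, 4)
    y = rng.exponential(scale=3.0, size=n)        # Y, drawn independently of X

    def g(t):
        return t**2

    def h(t):
        return np.cos(t)

    lhs = np.mean(g(x) * h(y))            # estimate of E{g(X)h(Y)}
    rhs = np.mean(g(x)) * np.mean(h(y))   # estimate of E{g(X)} E{h(Y)}
    print(lhs, rhs)                       # the two agree up to sampling error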

             When the correlation coefficient of two random variables vanishes, we say
           they are uncorrelated. It  should  be carefully pointed  out  that  what  we have
           shown is that independence implies zero correlation. The converse, however, is
           not true. This point is more fully discussed in what follows.
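  To anticipate that discussion, here is a minimal numerical counterexample (an
illustrative sketch, not the text's Example 4.10): take X symmetric about zero
and $Y = X^2$. Then Y is completely determined by X, yet
$\mathrm{Cov}(X, Y) = E\{X^3\} - E\{X\}E\{X^2\} = 0$, so the correlation
coefficient vanishes.

    import numpy as np

    # X uniform on [-1, 1] (symmetric about 0) and Y = X**2:
    # Y is a deterministic function of X, yet Cov(X, Y) = 0.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=1_000_000)
    y = x**2

    print(np.corrcoef(x, y)[0, 1])   # approximately 0 despite exact dependence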
  The covariance or the correlation coefficient is of great importance in the
analysis of two random variables. It is a measure of their linear
interdependence in the sense that its value reflects the accuracy with which one
random variable can be approximated by a linear function of the other. In order
to see this, let us consider the problem of approximating a random variable X by
a linear function of a second random variable Y, $aY + b$, where a and b are
chosen so that the mean-square error e, defined by

    e = E\{[X - (aY + b)]^2\},                                             (4.29)
is minimized. Upon taking partial derivatives of e with respect to a and b and
setting them to zero, straightforward calculations show that this minimum is
attained when

    a = \rho\,\frac{\sigma_X}{\sigma_Y},

and

    b = m_X - a\,m_Y.
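  The 'straightforward calculations' can be spelled out; the following is a
brief sketch (not reproduced from the text), writing
$\mathrm{Cov}(X, Y) = \rho\,\sigma_X\sigma_Y$ for the covariance:

    \begin{align*}
      \frac{\partial e}{\partial b} &= -2\,E\{X - (aY + b)\} = 0
        &&\Longrightarrow\quad b = m_X - a\,m_Y, \\[2pt]
      \frac{\partial e}{\partial a} &= -2\,E\{Y\,[X - (aY + b)]\} = 0
        &&\Longrightarrow\quad E\{XY\} - a\,E\{Y^2\} - b\,m_Y = 0.
    \end{align*}
    Substituting $b = m_X - a\,m_Y$ into the second condition and using
    $E\{XY\} = \mathrm{Cov}(X, Y) + m_X m_Y$ and $E\{Y^2\} = \sigma_Y^2 + m_Y^2$ gives
    \[
      \mathrm{Cov}(X, Y) - a\,\sigma_Y^2 = 0
      \quad\Longrightarrow\quad
      a = \frac{\mathrm{Cov}(X, Y)}{\sigma_Y^2} = \rho\,\frac{\sigma_X}{\sigma_Y}.
    \]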
Substituting these values into Equation (4.29) then gives $\sigma_X^2(1 - \rho^2)$ as the
minimum mean-square error. We thus see that an exact fit in the mean-square
sense is achieved when $|\rho| = 1$, and the linear approximation is the worst when
$\rho = 0$. More specifically, when $\rho = +1$, the random variables X and Y are said
to be positively perfectly correlated in the sense that the values they assume fall
on a straight line with positive slope; they are negatively perfectly correlated
when $\rho = -1$ and their values form a straight line with negative slope. These
two extreme cases are illustrated in Figure 4.3. The value of $|\rho|$ decreases as
scatter about these lines increases.
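  The best linear approximation and its minimum mean-square error are also easy
to verify by simulation. The sketch below (the means, standard deviations, and
correlation are arbitrary assumed values) draws a bivariate normal sample, forms
$aY + b$ with $a = \rho\,\sigma_X/\sigma_Y$ and $b = m_X - a\,m_Y$, and compares
the resulting mean-square error with $\sigma_X^2(1 - \rho^2)$:

    import numpy as np

    # Check that a = rho*sigma_X/sigma_Y, b = m_X - a*m_Y yield a mean-square
    # error E{[X - (aY + b)]^2} equal to sigma_X^2 * (1 - rho^2).
    rng = np.random.default_rng(0)
    m_x, m_y = 2.0, -1.0              # assumed means
    s_x, s_y, rho = 1.5, 0.5, 0.8     # assumed standard deviations and correlation

    cov = [[s_x**2,          rho * s_x * s_y],
           [rho * s_x * s_y, s_y**2         ]]
    x, y = rng.multivariate_normal([m_x, m_y], cov, size=1_000_000).T

    a = rho * s_x / s_y
    b = m_x - a * m_y
    mse = np.mean((x - (a * y + b))**2)
    print(mse, s_x**2 * (1 - rho**2))   # the two values agree up to sampling error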
  Let us again stress the fact that the correlation coefficient measures only the
linear interdependence between two random variables. It is by no means a
general measure of interdependence between X and Y. Thus, $\rho = 0$ does not
imply independence of the random variables. In fact, as Example 4.10 shows, the







