
Since $\hat{b} = 0.31 < 0.61$, we accept $H_0$. That is, we conclude that the data do not indicate a linear relationship between $E\{Y\}$ and $x$; the probability that we are wrong in accepting $H_0$ is 0.05.
In closing, let us remark that we are often called on to perform tests of simultaneous hypotheses. For example, one may wish to test $H_0$: $\alpha = 0$ and $\beta = 1$ against $H_1$: $\alpha \neq 0$, or $\beta \neq 1$, or both. Such tests involve both estimators $\hat{A}$ and $\hat{B}$ and hence require their joint distribution. This is also often the case in multiple linear regression, to be discussed in the next section. Such tests customarily involve F-distributed test statistics, and we will not pursue them here. A general treatment of simultaneous hypothesis testing can be found in Rao (1965), for example.

           11.2 MULTIPLE LINEAR REGRESSION

           The vector–matrix approach proposed in the preceding section provides a smooth
           transition from simple linear regression to linear regression involving more than
           one independent variable. In multiple linear regression, the model takes the form

$$E\{Y\} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_m x_m. \qquad (11.45)$$


Again, we assume that the variance of $Y$ is $\sigma^2$ and is independent of $x_1, x_2, \ldots$, and $x_m$. As in simple linear regression, we are interested in estimating the $(m + 1)$ regression coefficients $\beta_0$, $\beta_1, \ldots$, and $\beta_m$, obtaining certain interval estimates, and testing hypotheses about these parameters on the basis of a sample of $Y$ values with their associated values of $(x_1, x_2, \ldots, x_m)$. Let us note that our sample of size $n$ in this case takes the form of arrays $(x_{11}, x_{21}, \ldots, x_{m1}, Y_1)$, $(x_{12}, x_{22}, \ldots, x_{m2}, Y_2), \ldots, (x_{1n}, x_{2n}, \ldots, x_{mn}, Y_n)$. For each set of values $x_{ki}$, $k = 1, 2, \ldots, m$, $Y_i$ is an independent observation from the population $Y$ defined by

$$Y = \beta_0 + \beta_1 x_1 + \cdots + \beta_m x_m + E. \qquad (11.46)$$

As before, $E$ is the random error, with mean 0 and variance $\sigma^2$.
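
As a concrete illustration, the sketch below simulates a sample of the kind just described from model (11.46). The coefficient values, the sample size, and the choice of a normal law for $E$ (the model itself requires only mean 0 and variance $\sigma^2$) are all hypothetical, chosen purely for illustration.

```python
import numpy as np

# A minimal sketch of drawing a sample from model (11.46),
#   Y = beta_0 + beta_1 x_1 + ... + beta_m x_m + E.
# Coefficient values, sample size, and the normal error law are
# hypothetical choices for illustration only.
rng = np.random.default_rng(0)

m, n = 3, 200                            # m independent variables, n observations
beta = np.array([2.0, 0.5, -1.0, 3.0])   # hypothetical (beta_0, beta_1, ..., beta_m)
sigma = 0.4                              # standard deviation of the error E

x = rng.uniform(0.0, 1.0, size=(n, m))   # observed values x_ki, one row per observation
E = rng.normal(0.0, sigma, size=n)       # random error: mean 0, variance sigma^2
y = beta[0] + x @ beta[1:] + E           # sample Y_1, ..., Y_n
```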



           11.2.1  LEAST  SQUARES  METHOD  OF  ESTIMATION

To estimate the regression coefficients, the method of least squares will again be employed. Given observed sample-value sets $(x_{1i}, x_{2i}, \ldots, x_{mi}, y_i)$, $i = 1, 2, \ldots, n$, the system of observed regression equations in this case takes the form

$$y_i = \beta_0 + \beta_1 x_{1i} + \cdots + \beta_m x_{mi} + e_i, \quad i = 1, 2, \ldots, n. \qquad (11.47)$$
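
Before developing the estimators analytically, it may help to preview the least-squares solution numerically. The sketch below, continuing the hypothetical sample simulated above, fits Equations (11.47) by minimizing $\sum_i e_i^2$; this is the same minimization that the normal equations $(X^{T}X)\hat{\boldsymbol{\beta}} = X^{T}\mathbf{y}$ express in closed form.

```python
# Continuing the simulated sample above (x of shape (n, m), y of length n):
# form the design matrix of system (11.47) and choose beta_hat to minimize
# the sum of squared residuals, sum_i e_i^2.
X = np.column_stack([np.ones(len(y)), x])     # columns: 1, x_1i, ..., x_mi

# np.linalg.lstsq minimizes ||y - X beta||^2, i.e. it solves the normal
# equations (X^T X) beta_hat = X^T y.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

e = y - X @ beta_hat                          # observed residuals e_i
print(beta_hat)                               # close to the hypothetical beta above
```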







