Let us note in this example that, since $x_2 = x_1^2$, matrix $C$ is constrained in that its elements in the third column are the squared values of their corresponding elements in the second column. It needs to be cautioned that, for high-order polynomial regression models, constraints of this type may render matrix $C^T C$ ill-conditioned and lead to matrix-inversion difficulties.
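As a minimal numerical sketch of this caution (the equally spaced $x$ values and the NumPy usage are assumptions for illustration, not from the text), one can watch the condition number of $C^T C$ grow rapidly with polynomial order:

```python
import numpy as np

# Sketch: condition number of C^T C for polynomial regression of
# increasing order; x is taken as 50 equally spaced points in [0, 1]
# (an assumed design, not from the text).
x = np.linspace(0.0, 1.0, 50)

for order in (2, 4, 8, 12):
    # Columns of C are 1, x, x^2, ..., x^order.
    C = np.vander(x, N=order + 1, increasing=True)
    print(f"order {order:2d}: cond(C^T C) = {np.linalg.cond(C.T @ C):.3e}")
```

The rapid growth in the condition number is why numerically oriented treatments prefer orthogonal polynomials or QR-based solvers over forming and inverting $C^T C$ directly for high-order fits.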
FURTHER READING
Some additional useful references on regression analysis are given below.
Anderson, R.L., and Bancroft, T.A., 1952, Statistical Theory in Research, McGraw-Hill, New York.
Bendat, J.S., and Piersol, A.G., 1966, Measurement and Analysis of Random Data, John Wiley & Sons Inc., New York.
Draper, N., and Smith, H., 1966, Applied Regression Analysis, John Wiley & Sons Inc., New York.
Graybill, F.A., 1961, An Introduction to Linear Statistical Models, Volume 1, McGraw-Hill, New York.
PROBLEMS
11.1 A special case of simple linear regression is given by
$$Y = \beta x + E.$$
Determine:
(a) The least-square estimator $\hat{B}$ for $\beta$;
(b) The mean and variance of $\hat{B}$;
(c) An unbiased estimator for $\sigma^2$, the variance of $Y$.
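As a hedged numerical check (not part of the original problem set, and using arbitrary illustrative data), the closed form $\hat{B} = \sum_i x_i Y_i / \sum_i x_i^2$ obtained by minimizing $\sum_i (Y_i - \beta x_i)^2$ can be compared against a generic least-squares solver:

```python
import numpy as np

# Arbitrary illustrative data (assumed, not from the text).
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Closed-form estimator for the no-intercept model Y = beta * x + E.
b_closed = np.sum(x * y) / np.sum(x ** 2)

# The same fit via a generic least-squares solve of the n-by-1 system.
b_lstsq, *_ = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)

print(b_closed, b_lstsq[0])  # the two estimates agree
```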
11.2 In simple linear regression, show that the maximum likelihood estimators for $\alpha$ and $\beta$ are identical to their least-square estimators when $Y$ is normally distributed.
11.3 Determine the maximum likelihood estimator for variance $\sigma^2$ of $Y$ in simple linear regression, assuming that $Y$ is normally distributed. Is it a biased estimator?
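For intuition on the bias question, a small Monte Carlo sketch (the model parameters, design points, and replication count below are assumptions for illustration) compares the maximum likelihood estimator $\mathrm{SSE}/n$ with the divisor-corrected $\mathrm{SSE}/(n-2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma2, n, reps = 1.0, 2.0, 4.0, 10, 20_000
x = np.linspace(0.0, 1.0, n)  # fixed design points (assumed)

mle, corrected = [], []
for _ in range(reps):
    y = alpha + beta * x + rng.normal(0.0, np.sqrt(sigma2), n)
    b, a = np.polyfit(x, y, 1)            # least-squares slope b, intercept a
    sse = np.sum((y - (a + b * x)) ** 2)  # sum of squared residuals
    mle.append(sse / n)                   # maximum likelihood estimator
    corrected.append(sse / (n - 2))       # estimator with n - 2 divisor

print(np.mean(mle), np.mean(corrected), sigma2)
```

On average the maximum likelihood estimator comes out near $\sigma^2(n-2)/n$, while the $n-2$ divisor, which accounts for the two estimated coefficients, centers on $\sigma^2$.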
11.4 Since data quality is generally not uniform among data points, it is sometimes desirable to estimate the regression coefficients by minimizing the sum of weighted squared residuals; that is, $\hat{\alpha}$ and $\hat{\beta}$ in simple linear regression are found by minimizing
$$\sum_{i=1}^{n} w_i e_i^2;$$
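A minimal sketch of this weighted least-squares criterion (the data, weights, and direct normal-equation solve below are assumptions for illustration, not the book's development) fits $Y = \alpha + \beta x + E$ by solving $(X^T W X)\,b = X^T W y$:

```python
import numpy as np

# Assumed illustrative data and per-point weights w_i (not from the text).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([0.9, 2.1, 2.9, 4.2, 4.8])
w = np.array([1.0, 1.0, 0.5, 2.0, 1.0])

# Design matrix for Y = alpha + beta * x + E, and diagonal weight matrix.
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)

# Minimizing sum_i w_i e_i^2 leads to the weighted normal equations.
alpha_hat, beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(alpha_hat, beta_hat)
```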