Page 295 - Statistics for Environmental Engineers

The Precision of Estimates of a Nonlinear Model

The sum of squares function for the nonlinear model (Figure 33.3) is not symmetrical about the least squares parameter estimate. As a result, the confidence interval for the parameter θ is not symmetric. This is shown in Figure 33.4, where the confidence interval is 0.20 − 0.022 to 0.20 + 0.024, or [0.178, 0.224].
The asymmetry near the minimum is very modest in this example, and a symmetric linear approximation of the confidence interval would not be misleading. This usually is not the case when two or more parameters are estimated. Nevertheless, many computer programs do report confidence intervals for nonlinear models that are based on symmetric linear approximations. These intervals are useful as long as one understands what they are.
This asymmetry is one difference between the linear and nonlinear parameter estimation problems. The essential similarity, however, is that we can still define a critical sum of squares S_c, and all parameter values giving S ≤ S_c fall within the confidence interval. Chapter 35 explains how the critical sum of squares is determined from the minimum sum of squares and an estimate of the experimental error variance.
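The idea can be sketched numerically. In the Python sketch below the model, the data, and the F value are all assumed for illustration (a first-order decay η = exp(−θt) with synthetic observations, not the chapter's example): the confidence interval is simply the set of θ values whose sum of squares stays at or below the critical level S_c, and it need not be symmetric about the estimate.

```python
import numpy as np

# Assumed example: first-order decay model eta(t; theta) = exp(-theta * t)
# with synthetic data -- not the chapter's data set.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
rng = np.random.default_rng(1)
y = np.exp(-0.2 * t) + rng.normal(0.0, 0.01, t.size)

def S(theta):
    """Sum of squared residuals for a trial value of theta."""
    return np.sum((y - np.exp(-theta * t)) ** 2)

# Evaluate S(theta) on a fine grid; the minimum locates the estimate.
grid = np.linspace(0.05, 0.5, 2001)
ss = np.array([S(th) for th in grid])
theta_hat = grid[np.argmin(ss)]

# Critical level S_c = S_min * (1 + p/(n - p) * F), the form developed
# in Chapter 35; here p = 1 parameter and F(0.05; 1, 4) = 7.71.
n, p, F = t.size, 1, 7.71
Sc = ss.min() * (1.0 + p / (n - p) * F)

# The confidence interval is all theta with S(theta) <= S_c.
inside = grid[ss <= Sc]
lo, hi = inside.min(), inside.max()
```

Because S(θ) rises more steeply on one side of the minimum than the other, the distances theta_hat − lo and hi − theta_hat generally differ, which is the asymmetry seen in Figure 33.4.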




                       Comments
The method of least squares is used in the analysis of data from planned experiments and in the analysis of data from unplanned happenings. For the least squares parameter estimates to be unbiased, the residual errors (e = y − η) must be random and independent with constant variance. It is the tacit assumption that these requirements are satisfied for unplanned data that produces a great deal of trouble (Box, 1966). Whether the data are planned or unplanned, the residual (e) includes the effect of latent variables (lurking variables) about which we know nothing.
There are many conceptual similarities between linear least squares regression and nonlinear regression. In both, the parameters are estimated by minimizing the sum of squares function, which was illustrated in this chapter using one-parameter models. The basic concepts extend to models with more parameters.
For linear models, just as there is an exact solution for the parameter estimates, there is an exact solution for the 100(1 − α)% confidence interval. Moreover, the linear algebra used to compute the parameter estimates is so efficient that the work required is not noticeably different whether one or ten parameters are estimated.
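The exact linear solution can be shown in a few lines. This Python sketch uses invented straight-line data; the point is only that the estimate b̂ = (X′X)⁻¹X′y is computed directly from the normal equations, with no iterative search over a sum of squares surface.

```python
import numpy as np

# Invented straight-line data for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Design matrix for y = b0 + b1*x: a column of ones plus x.
X = np.column_stack([np.ones_like(x), x])

# Exact least squares solution of the normal equations (X'X) b = X'y.
b_hat = np.linalg.solve(X.T @ X, X.T @ y)
```

Adding columns to X (more parameters) changes nothing about the procedure; the same one-line solve applies.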
For nonlinear models, the sum of squares surface can have some interesting shapes, but the precision of the estimated parameters is still evaluated by attempting to visualize the sum of squares surface, preferably by making contour maps and tracing approximate joint confidence regions on this surface.
Evaluating the precision of parameter estimates in multiparameter models is discussed in Chapters 34 and 35. If there are two or more parameters, the sum of squares function defines a surface. A joint confidence region for the parameters can be constructed by tracing along this surface at the critical sum of squares level. If the model is linear, the joint confidence regions are based on parabolic geometry: for two parameters, a contour map of the joint confidence region is described by ellipses, and in higher dimensions by ellipsoids.
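As an illustration of tracing such a region, the following Python sketch grids the sum of squares surface for an assumed two-parameter model η = θ1(1 − exp(−θ2 t)) with invented data; every detail (model, data, F value) is an assumption for illustration. The joint confidence region is the set of parameter pairs whose sum of squares does not exceed the critical level.

```python
import numpy as np

# Assumed two-parameter model eta = theta1 * (1 - exp(-theta2 * t))
# with invented data, for illustration only.
t = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([3.4, 5.5, 7.6, 8.5, 8.9, 9.1])

def S(th1, th2):
    """Sum of squared residuals at a trial parameter pair."""
    return np.sum((y - th1 * (1.0 - np.exp(-th2 * t))) ** 2)

# Evaluate the sum of squares surface on a grid.
th1_grid = np.linspace(7.0, 12.0, 201)
th2_grid = np.linspace(0.1, 0.7, 201)
SS = np.array([[S(a, b) for b in th2_grid] for a in th1_grid])
i, j = np.unravel_index(SS.argmin(), SS.shape)
th1_hat, th2_hat = th1_grid[i], th2_grid[j]

# Critical level S_c = S_min * (1 + p/(n - p) * F) with p = 2
# parameters, n = 6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F = t.size, 2, 6.94
Sc = SS.min() * (1.0 + p / (n - p) * F)

# Boolean mask of the joint confidence region; contouring SS at the
# level Sc traces its boundary on a contour map.
region = SS <= Sc
```

With matplotlib, `plt.contour(th2_grid, th1_grid, SS, levels=[Sc])` would draw the boundary of this region; for a nonlinear model it is generally a distorted, banana-shaped curve rather than an ellipse.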




References

Box, G. E. P. (1966). "The Use and Abuse of Regression," Technometrics, 8, 625–629.
Chatterjee, S. and B. Price (1977). Regression Analysis by Example, New York: John Wiley.
Draper, N. R. and H. Smith (1998). Applied Regression Analysis, 3rd ed., New York: John Wiley.
Myers, R. H. (1986). Classical and Modern Regression with Applications, Boston, MA: Duxbury Press.
                       © 2002 By CRC Press LLC