4 Systems Engineering Methodology and Methods
There are three uses to which models may normally be put. The model categories corresponding to these three uses are descriptive models, predictive or forecasting models, and policy or planning models. The objective of a descriptive model is to represent and replicate the important features of a given problem. Good descriptive models are of considerable
value in that they reveal much about the substance of complex issues and how, typically in
a retrospective sense, change over time has occurred. One of the primary purposes behind
constructing a descriptive model is to learn about the past. Often the past will be a good
guide to the future.
In building a predictive or forecasting model, we must be especially concerned with determination of proper cause-and-effect relationships. If the future is to be predicted with integrity, we must have a method with which to determine accurately the exogenous variables, that is, the input variables that result from external causes. Also, the model structure and the parameters within that structure must be valid for the model to be valid. Often, it will not be possible to predict all exogenous variables accurately; in that case, conditional predictions can be made from particular assumed values of the unknown exogenous variables.
The future is inherently uncertain. Consequently, predictive or forecasting models are
often used to generate a variety of future scenarios, each a conditional prediction of the
future based on some conditioning assumptions. In other words, we develop an "if–then" model.
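A minimal sketch of this kind of conditional prediction is given below. The model, the exogenous growth rates, and the scenario labels are all hypothetical; the point is only that each scenario is a forecast conditioned on an assumed value of an exogenous variable that cannot itself be predicted with confidence.

```python
# Minimal sketch (hypothetical model and numbers): conditional "if-then"
# forecasts of a state variable under assumed values of an exogenous growth rate.

def forecast(initial_level, growth_rate, years):
    """Project a state variable forward under one assumed exogenous growth rate."""
    trajectory = [initial_level]
    for _ in range(years):
        trajectory.append(trajectory[-1] * (1.0 + growth_rate))
    return trajectory

# Each scenario is a conditional prediction: "if demand grows at rate r,
# then the projected levels are ...".
scenarios = {"low growth": 0.01, "nominal growth": 0.03, "high growth": 0.05}

for name, rate in scenarios.items():
    levels = forecast(initial_level=100.0, growth_rate=rate, years=5)
    print(f"{name}: " + ", ".join(f"{x:.1f}" for x in levels))
```

In practice the forecasting model would be far richer than a single growth equation, but the conditioning structure of the scenarios is the same.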
Policy or planning models are much more than predictive or forecasting models, although any policy or planning model is also a predictive or forecasting model. The outcome from a policy or planning model must be evaluated in terms of a value system. Policy or planning efforts must not only predict the outcomes of implementing alternative policies but also present these outcomes in terms of the value system, in a form useful and suitable for ranking alternatives, evaluation, and decision making. Thus, a policy model must contain some provision for impact interpretation.
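As a rough illustration of impact interpretation, the sketch below scores predicted multiattribute outcomes against a simple additive value system and ranks the alternative policies. The policies, criteria, and weights are hypothetical; an actual value system would be elicited from the client and need not be additive.

```python
# Minimal sketch (hypothetical policies, criteria, and weights): interpreting
# predicted outcomes through a value system so alternatives can be ranked.

# Predicted outcomes for each alternative policy, on three notional criteria
# scaled to [0, 1].
predicted_outcomes = {
    "policy A": {"cost": 0.40, "benefit": 0.90, "risk": 0.30},
    "policy B": {"cost": 0.70, "benefit": 0.60, "risk": 0.10},
    "policy C": {"cost": 0.20, "benefit": 0.50, "risk": 0.50},
}

# The client's value system, expressed here as simple additive weights;
# negative weights penalize cost and risk.
value_weights = {"cost": -0.3, "benefit": 0.5, "risk": -0.2}

def value_score(outcome, weights):
    """Collapse a multiattribute outcome into a single value-system score."""
    return sum(weights[criterion] * outcome[criterion] for criterion in weights)

# Rank alternatives by their scores under the stated value system.
ranking = sorted(predicted_outcomes.items(),
                 key=lambda item: value_score(item[1], value_weights),
                 reverse=True)

for policy, outcome in ranking:
    print(f"{policy}: score = {value_score(outcome, value_weights):+.3f}")
```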
Model usefulness cannot be determined by objective truth criteria alone. Well-defined and well-stated functions and purposes for the simulation model are needed before its usefulness can be judged. Fully objective criteria for model validity do not typically exist.
Development of a general-purpose, context-free simulation model appears unlikely; the task
is simply far too complicated. We must build models for specific purposes, and thus the
question of model validity is context dependent.
Model credibility depends on the interaction between the model and model user. One
of the major potential difficulties is that of building a model that reflects the outlook of the
modeler. This activity is proscribed in effective systems engineering practice, since the purpose of a model is to describe systematically the "view" of a situation held by the client, not that held by the analyst.
A great variety of approaches have been designed and used for the forecasting and
assessment that are the primary goals of systems analysis. There are basically two classes
of methods that we describe here: expert-opinion methods and modeling and/or simulation
methods.
Expert-opinion methods are based on the assumption that knowledgeable people will be
capable of saying sensible things about the impacts of alternative policies on the system, as
a result of their experience with or insight into the issue or problem area. These methods
are generally useful, and they are particularly valuable when there are no established theories or data concerning system operation, a circumstance that precludes the use of more precise analytical tools. Among the most prominent expert-opinion-based forecasting methods are surveys and Delphi. There are, of course, many other methods of asking experts for their opinions, such as hearings, meetings, and conferences. A particular problem with such methods is that cognitive bias and value incoherence are widespread, often leading to inconsistent and self-contradictory results.
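The Delphi procedure, for instance, tries to counter some of these difficulties by keeping responses anonymous, feeding back a statistical summary of the group's estimates, and iterating over several rounds. The sketch below is a schematic of that feedback loop only; the initial estimates, the number of rounds, and the simple move-toward-the-median revision rule are hypothetical stand-ins for the judgments real experts would make.

```python
# Schematic sketch (hypothetical estimates and revision rule) of Delphi-style
# aggregation: anonymous estimates, statistical group feedback, iterated rounds.

import statistics

def delphi_rounds(estimates, rounds=3, pull=0.5):
    """Feed back the group median each round; each expert moves a fraction
    `pull` of the way from their own estimate toward that median."""
    for r in range(1, rounds + 1):
        feedback = statistics.median(estimates)                     # anonymized group feedback
        estimates = [e + pull * (feedback - e) for e in estimates]  # experts revise
        spread = max(estimates) - min(estimates)
        print(f"round {r}: feedback median = {feedback:.1f}, "
              f"spread after revision = {spread:.1f}")
    return estimates

# Hypothetical initial estimates from five experts (e.g., forecast demand in units).
final_estimates = delphi_rounds([120.0, 95.0, 150.0, 110.0, 130.0])
```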