11.2.1 Defining Uncertainty, Variability and Sensitivity
The term uncertainty is used with considerable variation in its definition, sometimes including and sometimes excluding adjacent concepts such as variability and sensitivity. It is therefore difficult, if not impossible, to give a universally valid and accepted definition of uncertainty. For the sake of defining a common understanding within the
scope of this book, we use the definition of uncertainty as comprising everything we
do not know, expressed as the probability or confidence for a certain event to
occur. More precisely, the “unknown” includes both random and systematic errors
(of estimating, measuring or collecting data), mistakes, and epistemological (or
epistemic) uncertainty (i.e. lack of scientific knowledge and consequent misinterpretations). To put it a bit bluntly, uncertainty in principle describes the degree to
which we may be off from the truth. In reality it is of course impossible for us to
know that, otherwise we would not have to face uncertainty since we would know
the truth (and we will avoid attempting to define what “truth” itself means).
Therefore, in practice we define reference points that we assume to represent the truth, or at least to be close to it. A typical example of such a reference point is a measurement. If we trust the measuring method and protocol, we trust that a measurement represents a sort of truth at a specific point in space and time, and the difference between a modelled estimate and a corresponding measured value can then be used as an indicator of uncertainty. Ciroth et al. (2004) discuss and nicely illustrate
this discrepancy between measured and true value and what uncertainty represents
in that respect.
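As a minimal, purely illustrative sketch (the values below are invented and not taken from Ciroth et al. 2004 or any real dataset), the relative deviation between modelled estimates and measured reference values can serve as such a simple uncertainty indicator:

```python
import numpy as np

# Hypothetical numbers (not from the source): modelled estimates vs. measured
# reference values that we treat as our "truth" at specific points in space and time.
measured = np.array([12.1, 11.8, 12.5, 12.0])
modelled = np.array([11.4, 12.6, 13.1, 11.7])

# Relative deviation of the model from the measured reference,
# used here as a simple indicator of uncertainty.
relative_deviation = (modelled - measured) / measured

print("relative deviations:", np.round(relative_deviation, 3))
print("mean absolute relative deviation: {:.1%}".format(np.mean(np.abs(relative_deviation))))
```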
It is then important to keep in mind that the measured value inevitably comes
with its own uncertainty due to possible measurement errors (and mistakes) and due
to the uncertainty of how suitable the measurement method was and how representative the sampling was with regard to the actual “truth”. Uncertainty can thus be quantified
and reduced by knowing more, which usually requires us to invest more resources
in order to gain more knowledge (e.g. by performing additional measurements or
collecting more data and refining the model). However, no matter how many resources we have available, we can never be certain that we have eliminated (or at
least minimised) uncertainty.
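A small sketch, again with invented numbers, illustrates how investing in additional measurements narrows the confidence interval around an estimated value (i.e. reduces uncertainty) without ever eliminating it:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical repeated measurements of one quantity. The "true" value is assumed
# here only to generate the data; in practice it is unknown.
true_value = 50.0
measurement_noise = 5.0  # assumed standard deviation of measurement error

for n in (5, 20, 100, 1000):
    sample = rng.normal(true_value, measurement_noise, size=n)
    sem = sample.std(ddof=1) / np.sqrt(n)  # standard error of the estimated mean
    print(f"n={n:5d}  mean={sample.mean():6.2f}  approx. 95% CI half-width = {1.96 * sem:5.2f}")
```

The half-width shrinks roughly with the square root of the number of measurements, but it never reaches zero.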
In order to define variability, let’s take the example of body weight distributions
in a human population. Many quantities we observe will take more than one value as soon as we measure more than one sample (i.e. a sub-set of data points from a population of measured data), human body weight being an intuitive example. We are thus faced with a natural variability that simply represents the
variety or spread in the data that we will always observe. With enough resources at
hand that allow us to take every possible sample, we can perfectly well measure and
quantify this variability, but we can never reduce it. In the context of LCA, we are
typically faced with three different types of variability: (1) temporal variability (e.g.
seasonal changes in temperature), (2) spatial or geographical variability (e.g. population density in different regions), and (3) inter-individual variability of humans,
animals, other species (e.g. differences in diets) or technologies.
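The following sketch, using an invented body-weight population, illustrates this point: the spread estimated from a sample stabilises around the population's natural variability as the sample grows, rather than shrinking the way uncertainty about an estimate does:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical body-weight "population" in kg (all numbers invented).
population = rng.normal(loc=70.0, scale=12.0, size=100_000)

for n in (10, 100, 10_000):
    sample = rng.choice(population, size=n, replace=False)
    print(f"n={n:6d}  sample mean = {sample.mean():5.1f} kg  "
          f"sample spread (std) = {sample.std(ddof=1):5.1f} kg")
```

With more samples the spread is estimated more precisely, but it does not become smaller: variability can be quantified, not reduced.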