
282                                               R.K. Rosenbaum et al.

Simplicity is often perceived as a desirable quality of a model, making it easy to understand and less data demanding. Complexity, on the other hand, is frequently perceived as cumbersome, non-transparent and data intensive. However, rejecting complexity as such, without regard for its relevance and influence on the decision at hand, will of course be simpler and will also lead to a decision, but it may not be a decision that fulfils the LCA objective of choosing an environmentally preferable option. In other words, it may be a more precise but less accurate, and thus potentially misleading, decision. Given the inherent (i.e. unavoidable) complexity of environmental processes and our still limited knowledge of them, the principle that “It is better to be vaguely right than exactly wrong” (Read 1920) is a much-cited and useful angle when discussing uncertainties in LCA. It also acknowledges that we should never design our models to be more complex than necessary, in order to avoid “paralysis by analysis”, which could leave us with no operational model at all and, hence, no decision (support).



            11.2.3 Representing Uncertainty


The probabilistic nature of the uncertainty of the studied process or object is conceptualised by a probability distribution. The probability distribution of a continuous variable is described by a distribution function, usually the probability density function (PDF—not to be confused with the abbreviation PDF for Potentially Disappeared Fraction of species as used in Chap. 10). In practice, the PDF of an input parameter x is estimated from the values x_i measured over a sample, ranging from a minimum to a maximum value. Hence, the probability is approximated by the relative frequency when enough values are sampled. For example, when measuring the body weight of individuals in a human population of several thousand people, we will always find a range of values, with a minimum given by the lightest and a maximum given by the heaviest individual(s) among those measured. Plotting the full range of measured values on the x-axis and how often each of these values occurs (i.e. their relative frequency) on the y-axis results in a distribution function (a PDF), as illustrated in Fig. 11.6.
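The body-weight example above can be sketched in a few lines of code. The following is an illustrative sketch, not taken from the chapter: the sample is simulated (the weights, bin width, and distribution parameters are assumptions), and the PDF is approximated by the relative frequency of values falling into each bin, exactly as the text describes.

```python
import random
from collections import Counter

# Simulated stand-in for measured data: body weights (kg) of a few
# thousand individuals (the mean of 70 kg and spread of 12 kg are
# hypothetical values chosen for illustration only).
random.seed(42)
weights = [random.gauss(70, 12) for _ in range(5000)]

# Group the measurements into 5 kg bins and count occurrences per bin.
bin_width = 5
counts = Counter(int(w // bin_width) * bin_width for w in weights)

# Relative frequency = count in bin / total sample size; for a large
# enough sample this approximates the probability of each bin.
n = len(weights)
rel_freq = {b: c / n for b, c in sorted(counts.items())}

print(f"range of measured values: {min(weights):.1f}-{max(weights):.1f} kg")
for b, f in rel_freq.items():
    # A crude text histogram: x-axis = weight bin, y-axis = relative frequency
    print(f"{b:>3}-{b + bin_width:<3} kg: {f:.3f} {'#' * int(f * 200)}")
```

Because the relative frequencies are counts divided by the sample size, they sum to 1, which is the discrete counterpart of a PDF integrating to 1 over its range.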
The shape of this function varies substantially depending on the frequency of the values of a variable. Many shape patterns have been formally defined and named, distinguishing continuous distributions such as the normal, log-normal, or beta from discrete ones such as the binomial, Poisson, or hypergeometric, the latter group being characterised by a probability mass function (PMF) rather than a PDF. When representing uncertainties, these names describe the type of distribution and are an essential element when addressing the uncertainty of a (measured or estimated) parameter or of the model output. Various methods exist to fit a continuous or a discrete distribution to a set of values.
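As a minimal sketch of one such fitting method (an assumed example, not a method prescribed by the chapter): for a normal distribution, the maximum-likelihood fit to a sample is simply the sample mean and the population standard deviation, from which the fitted PDF can then be evaluated.

```python
import math
import statistics

# Hypothetical sample of measured values (e.g. body weights in kg);
# the numbers are invented for illustration.
sample = [68.1, 72.4, 65.0, 80.2, 74.9, 69.3, 77.5, 71.0, 66.8, 73.6]

# Maximum-likelihood estimates for a normal distribution:
mu_hat = statistics.fmean(sample)      # estimate of the mean
sigma_hat = statistics.pstdev(sample)  # estimate of the standard deviation

def normal_pdf(x, mu, sigma):
    """Probability density function of the fitted normal distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

print(f"fitted normal: mu = {mu_hat:.2f} kg, sigma = {sigma_hat:.2f} kg")
print(f"density at the mean: {normal_pdf(mu_hat, mu_hat, sigma_hat):.4f}")
```

Fitting other distribution types (log-normal, beta, or discrete PMFs such as the Poisson) follows the same idea, estimating the distribution's parameters from the sample, but uses different estimators.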
In general, important measures for describing the uncertainty of an input parameter x or of the model output are the standard deviation, for the spread of a distribution, and, for the central tendency of a distribution, the arithmetic mean (or average), the