Page 219 - Materials Chemistry, Second Edition

10.5 Challenges in the application of MADM for LCSA
            top performers (Kalbar et al., 2017a). Finally, indicator uncertainty involves selection of indi-
            cators in a study that are irrelevant or incomplete (Heijungs and Huijbregts, 2004). Addition-
            ally, uncertainties could also be associated with the framing of the problem, selection of
            method for aggregation, and levels of selected attributes (Scholten et al., 2015). There is no
clearly documented way to fully remove this uncertainty, as indicator selection is a subjective process that depends on the domain knowledge of the involved researcher(s).
  Tzeng and Huang (2011) suggest that, before proceeding with MADM analysis, the data should be plotted as a histogram and its distribution and standard deviation should be checked. If the distribution is nonnormal and the standard deviation is large, then sensitivity analysis is mandatory during MADM analysis. Sensitivity analysis must precede uncertainty analysis
            (Kleijnen, 1994).
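This pre-check can be sketched as follows; the function name and threshold values are illustrative assumptions (not from Tzeng and Huang), using the coefficient of variation for spread and sample skewness as a simple non-normality flag:

```python
import numpy as np

def needs_sensitivity_analysis(values, cv_threshold=0.3, skew_threshold=1.0):
    """Flag an attribute's data for mandatory sensitivity analysis when its
    spread is large (coefficient of variation) or its distribution departs
    from normality (large sample skewness). Thresholds are illustrative."""
    x = np.asarray(values, dtype=float)
    mean, std = x.mean(), x.std(ddof=1)
    cv = std / abs(mean)                      # spread relative to the mean
    skew = np.mean(((x - mean) / std) ** 3)   # symmetric data gives skew ~ 0
    return cv > cv_threshold or abs(skew) > skew_threshold
```

Attributes flagged by such a check would then be varied systematically during the MADM run to see whether the ranking changes.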



            10.5.7 Interpretation of the results
  Results from an MCDA analysis typically involve a ranking of alternatives with respect to a specific set of attributes. The results are obtained in the form of an aggregated index and need further interpretation to derive correct decision support. For example, Figueira et al. (2005) and Munda (2005) conducted studies on four different cities using a distance-based method such as TOPSIS. These studies suggested that results from MADM analysis cannot be relied upon blindly. If the results are not robust even when different aggregation schemes are used, the areas of uncertainty discussed in Section 10.5.6 must be reconsidered. Therefore, in MADM analysis the robustness of the decision process is more critical than the final solution itself.
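To illustrate the distance-based aggregation behind TOPSIS, a minimal sketch follows (the function name and interface are our own, not taken from the cited studies): each alternative is scored by its relative closeness to an ideal point and remoteness from an anti-ideal point.

```python
import numpy as np

def topsis_scores(matrix, weights, benefit):
    """Closeness-to-ideal scores for a decision matrix (rows = alternatives,
    columns = attributes); higher scores indicate better alternatives.
    `benefit[j]` is True when larger values of attribute j are better."""
    m = np.asarray(matrix, dtype=float)
    # vector-normalize each attribute column, then apply the attribute weights
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    # ideal / anti-ideal points: max is best for benefit attributes, min for costs
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)  # distance to the ideal point
    d_neg = np.linalg.norm(v - anti, axis=1)   # distance to the anti-ideal point
    return d_neg / (d_pos + d_neg)
```

A dominated alternative receives a score near 0 and a dominating one a score near 1, which is why closeness coefficients, rather than raw ranks, are the natural input for the robustness checks discussed above.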
              Similarly, LCSA also has a major challenge in its interpretation of results, where the
            integration of three different tools (LCA, LCC, and SLCA) is required to produce a collective
            result (Hannouf and Assefa, 2017). Zhang and Haapala (2015) suggested the use of MADM
            approaches as an efficient way of developing frameworks to integrate the tools and interpret
            combined results. Zampori et al. (2016) provided general guidelines to interpret results,
            in which identification of significant issues can be achieved by the use of MADM methods.
            Additionally, the study also recommended thorough checks such as a completeness check of the inventory data, a sensitivity analysis to assess the reliability of the results, and a consistency check of the methods and assumptions. There has not been sufficient work on the interpretation of results in MADM integrated with LCSA; one effort in this direction is the use of radar diagrams, as demonstrated by Kalbar et al. (2012).
  The results of ranking in LCSA based on MADM are an aggregated score, i.e., a single value for each alternative. Considering all the methodological choices, data uncertainties, effects of weights, and the limitations of MADM methods (e.g., rank reversal), an alternative cannot be concluded to be the best performing one unless its score differs significantly from that of the second-ranked alternative. For example, Kalbar et al. (2016) implemented an approach wherein, in such cases, the top two to three alternatives with almost equal scores are concluded to be the most-preferred alternatives.
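Grouping near-equal top scores instead of forcing a single winner can be sketched as follows; the helper name and the 5% relative tolerance are illustrative assumptions, not values from Kalbar et al. (2016):

```python
def most_preferred(scores, rel_tol=0.05):
    """Indices of alternatives whose aggregated score lies within rel_tol
    (relative to the best score) of the top score; all of them are reported
    jointly as most-preferred rather than declaring a single winner."""
    best = max(scores)
    return [i for i, s in enumerate(scores) if best - s <= rel_tol * best]
```

With scores of, say, 0.82 and 0.80, both alternatives fall inside the tolerance band and would be reported together, reflecting that the gap is smaller than the uncertainties involved.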
  In a real-life situation, it is recommended as best practice to apply multiple MADM methods to the given problem with different weighting schemes reflecting the priorities of stakeholders. The alternatives that are most frequently ranked topmost can then be concluded to be the most preferred ones.
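This frequency-based consensus step can be sketched as follows, assuming each method/weighting run produces a best-first ranking of alternative names (the function name is illustrative):

```python
from collections import Counter

def consensus_top(run_rankings):
    """Given best-first rankings from several MADM method/weighting runs,
    count how often each alternative is ranked topmost and return the
    alternatives ordered by that frequency (most frequent winner first)."""
    counts = Counter(ranking[0] for ranking in run_rankings)
    return [name for name, _ in counts.most_common()]
```

An alternative that tops the ranking under most methods and weighting schemes is a far safer recommendation than one that wins under a single configuration.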