than 5, it may significantly alter the value of another metric, such as “DRAC Size,” so that it is no
longer in an acceptable range. Thus, such tight constraints may mandate continued refinement
without the architecture ever reaching an equilibrium state.
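The interplay can be shown with a small sketch. The metric names, acceptable ranges, and the refinement step below are illustrative assumptions rather than part of the RARE method; the point is only that a refinement that satisfies one constraint can push another metric out of range, so the loop may never settle.

```python
# Hypothetical sketch: interdependent metric constraints during DRA refinement.
# Metric names, ranges, and the refinement rule are assumptions for illustration.

ACCEPTABLE_RANGES = {
    "avg_coupling": (0, 5),   # e.g., average coupling must not exceed 5
    "drac_size": (3, 12),     # e.g., data/services allocated per DRAC
}

def violations(metrics):
    """Return the names of metrics that fall outside their acceptable range."""
    return [name for name, value in metrics.items()
            if not (ACCEPTABLE_RANGES[name][0] <= value <= ACCEPTABLE_RANGES[name][1])]

def refine(metrics):
    """Toy refinement: splitting DRACs lowers coupling but also shrinks DRAC size."""
    return {"avg_coupling": metrics["avg_coupling"] - 1,
            "drac_size": metrics["drac_size"] - 2}

metrics = {"avg_coupling": 7, "drac_size": 6}
for step in range(10):                      # bound the loop; it may never converge
    bad = violations(metrics)
    if not bad:
        print("equilibrium reached:", metrics)
        break
    print(f"step {step}: out of range -> {bad}; refining")
    metrics = refine(metrics)
else:
    print("no equilibrium within the iteration budget:", metrics)
```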
                    RELATED WORK


                    This section highlights related work in software architecture evaluation methods and object-oriented
                    analysis and design as related to software architecture derivation and evaluation.

                    Software Architecture Evaluation

                    Approaches for software architecture evaluation cover a broad spectrum. On the one hand, evalua-
                    tion approaches can be classified based on the qualities they emphasize—from methods that focus
                    on particular qualities to broad-brush methods that provide a general evaluation framework. On
                    the other hand, approaches can be categorized based on the evaluation technique employed, from
                    static methods such as scenario analysis or metrics-based assessment to dynamic methods such
                    as simulation. While dynamic methods often provide an accurate measure of system performance
                    and other runtime qualities, such analysis is not relevant to the RARE DRA derivation and evalu-
ation process and will not be addressed in detail. RARE evaluation focuses on measuring
characteristics of the architectural structure that results from defining DRACs and allocating
                    DM functionality and data to those DRACs. Thus, static approaches are more applicable to the
                    RARE process.
                      Much of the software architecture research has progressed as a result of individual communities
                    focusing on specific software system quality issues such as maintainability (Bengtsson and Bosch,
                    1999; Briand, Morasca, and Basili, 1993), comprehensibility (Briand, Morasca, and Basili, 1993),
                    reliability (Abd-Allah, 1997; Wang, Wu, and Chen, 1999), performance (Abd-Allah, 1997), inte-
                    grability (Abowd et al., 1993), reusability (Zhao, 2000), and flexibility (Lassing, Rijsenbrij, and
                    Van Vliet, 1999). RARE DRA derivation is concerned with structure, primarily the identification
                    of classes and the allocation of data and functionality. Consequently, these specific evaluation
                    methods must be applicable during the allocation activity to be relevant to RARE DRA evalua-
                    tion. For example, the recommendation by Briand, Morasca, and Basili (1993) for low average
                    coupling and high average cohesion to encourage maintainability and comprehensibility can be
                    directly applied to DRA data and service allocation—in fact, coupling/cohesion can be consid-
                    ered the strongest influence in DRA derivation. However, the reliability approaches suggested by
Abd-Allah (1997) and Wang, Wu, and Chen (1999) cannot be applied wholesale, since they rely
heavily on the semantics of architectural styles, some of which imply implementation properties
not addressed in the Systems Engineering Process Activities (SEPA) DRA.
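To make the coupling/cohesion recommendation concrete in the allocation setting, the following sketch computes average coupling and cohesion for a hypothetical allocation of services and data to DRACs. The data model, names, and counting rules are illustrative assumptions, not the metrics actually defined by Briand, Morasca, and Basili (1993) or by RARE.

```python
# Minimal sketch (assumed data model, not the SEPA/RARE representation):
# each service is allocated to one DRAC and references a set of data items,
# and each data item is owned by one DRAC. A reference that crosses a DRAC
# boundary counts toward coupling; a reference that stays inside the owning
# DRAC counts toward cohesion.

from collections import defaultdict

service_to_drac = {"track": "Sensing", "fuse": "Sensing", "plan": "Guidance"}
data_to_drac    = {"raw_obs": "Sensing", "track_file": "Sensing", "route": "Guidance"}
service_refs    = {"track": {"raw_obs", "track_file"},
                   "fuse":  {"track_file", "route"},
                   "plan":  {"route", "track_file"}}

internal = defaultdict(int)   # references that stay within the service's DRAC
external = defaultdict(int)   # references that cross a DRAC boundary

for service, refs in service_refs.items():
    home = service_to_drac[service]
    for item in refs:
        if data_to_drac[item] == home:
            internal[home] += 1
        else:
            external[home] += 1

dracs = set(service_to_drac.values())
avg_coupling = sum(external[d] for d in dracs) / len(dracs)
avg_cohesion = sum(internal[d] / (internal[d] + external[d] or 1) for d in dracs) / len(dracs)
print(f"average coupling: {avg_coupling:.2f}, average cohesion: {avg_cohesion:.2f}")
```

Under this toy counting rule, a candidate allocation with lower average coupling and higher average cohesion would be preferred during DRA data and service allocation, which is how the Briand, Morasca, and Basili recommendation can be applied as a static check.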
                      While it is important for research efforts to focus on methods for evaluating individual quali-
                    ties, real-world software systems require a balance of different software qualities (Bengtsson and
                    Bosch, 1999; Bosch and Molin, 1999). Two approaches have been proposed for static evaluation
                    of architectures that are not specific to a particular quality: scenario-based methods and metrics-
                    based methods.
                      The most prevalent overall architecture evaluation approaches are rooted in scenario-based
                    methods such as the Software Architecture Analysis Method (SAAM) (Kazman et al., 1994) and
                    the Architectural Tradeoff Analysis Method (ATAM) (Kazman et al., 1998; Lougee, 2005). The
                    success of scenario-based approaches has led to the development of a number of related evalua-
                    tion methods (Asundi, Kazman, and Klein, 2001; Bengtsson, 2002; Dobrica and Niemela, 2002;