All of these business decisions require input from technical staff
to determine the impact of such requirements, and they also require that the
technical staff be informed of how important these requirements are. Too often in
practice, however, there is a gap between what an
organization wants and what its technical team delivers. For example,
a business unit wants to create a high-performing infotainment
system for a luxury line of cars with a compressed time to market. The
technical team is forced to spread the development of parts of this
system across geographically distributed teams so that parallel
development efforts can meet the compressed schedule. When the
components developed by those teams are integrated, they
exceed the memory and performance budgets. While individual
components are carefully crafted, not enough attention has been
given to the overall system goal of achieving high performance within
the given resource constraints. The result is that the business unit is
not able to produce the desired product. In this example, the difference
between what was desired and what was delivered cost the company
the hundreds of millions of dollars spent developing the system and
billions of dollars in potential lost revenue.
The Notion of Quality
Quality attribute requirements are important both in terms of
customer satisfaction and in driving the design of a software system.
Yet asserting the importance of quality attribute requirements is only
an opening for many other questions [Ozkaya et al. 2008].
There is no shortage of taxonomies and definitions of quality
attributes. The best known is probably ISO 9126, which defines
22 different quality attributes and subattributes (which we refer to as
quality attribute concerns) [Glinz 2008]. There are questions concerning
the extent to which practitioners use the terminology defined in ISO
9126, and whether the quality attributes it defines cover the qualities
about which practitioners are most concerned. We have observed
during architectural evaluations that practitioners sometimes do not
use consistent terminology and have concerns that are not covered in
relevant taxonomies. Our approach to resolving terminological
ambiguities is to use quality attribute scenarios as a means of capturing
the precise concerns of the stakeholders. This allows us to supplement
the terms used by various stakeholders with a specification that is
independent of quality attribute definitions and taxonomies.
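As an illustrative sketch (the particular stimulus and response measure
here are hypothetical, not drawn from our data), a concern expressed
informally as "the infotainment system must respond quickly" might be
captured in the six-part scenario form commonly used in architecture
evaluations:

   Source of stimulus: a driver interacting with the head unit
   Stimulus: the driver selects a navigation destination
   Artifact: the navigation subsystem
   Environment: normal operation, with media playback active
   Response: the route is computed and displayed
   Response measure: within two seconds for 95 percent of requests

Stated in this form, the concern can be reviewed, negotiated, and tested
without first agreeing on whether it belongs under "performance,"
"efficiency," or "time behavior" in any particular taxonomy.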
For example, ISO 9126 does not have an explicit performance
category; the relevant concerns appear under efficiency, and only
two are listed: time behavior and resource utilization.
Another commonly used taxonomy is the FURPS+ scheme, which
refers to functionality, usability, reliability, performance, and
supportability. FURPS+ lists recovery time, response time, shutdown
time, startup time, and throughput as concerns under the performance
category. All of these concerns appear in our data, along with some