

Since all of these conditions fail for the values shown in Figure 4.10, the metrics data are derived from a stable process, and trend information can be legitimately inferred from the metrics collected. Referring to Figure 4.10, it can be seen that the variability of E_r decreases after project 10 (i.e., after an effort to improve the effectiveness of reviews). By computing the mean value for the first 10 and the last 10 projects, it can be shown that the mean value of E_r for projects 11-20 shows a 29 percent improvement over E_r for projects 1-10. Since the control chart indicates that the process is stable, it appears that efforts to improve review effectiveness are working.
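
The before/after comparison described above is nothing more than a mean computed over each half of the data. The following is a minimal sketch in Python; the E_r values are illustrative assumptions chosen to reproduce the roughly 29 percent improvement cited in the text, not the actual data plotted in Figure 4.10.

    # Illustrative values of the review metric E_r for 20 projects (assumed data,
    # not the values from Figure 4.10).
    er_values = [0.45, 0.60, 0.42, 0.58, 0.50, 0.62, 0.44, 0.55, 0.48, 0.56,   # projects 1-10
                 0.66, 0.70, 0.64, 0.66, 0.68, 0.66, 0.65, 0.69, 0.67, 0.70]   # projects 11-20

    mean_before = sum(er_values[:10]) / 10    # mean E_r, projects 1-10
    mean_after = sum(er_values[10:]) / 10     # mean E_r, projects 11-20
    improvement = (mean_after - mean_before) / mean_before * 100

    print(f"mean E_r, projects 1-10:  {mean_before:.2f}")
    print(f"mean E_r, projects 11-20: {mean_after:.2f}")
    print(f"improvement: {improvement:.0f} percent")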


                 4.8   METRICS FOR SMALL ORGANIZATIONS

If you're just starting to collect software metrics, remember to keep it simple. If you bury yourself with data, your metrics effort will fail.

The vast majority of software development organizations have fewer than 20 software people. It is unreasonable, and in most cases unrealistic, to expect that such organizations will develop comprehensive software metrics programs. However, it is reasonable to suggest that software organizations of all sizes measure and then use the resultant metrics to help improve their local software process and the quality and timeliness of the products they produce. Kautz [KAU99] describes a typical scenario that occurs when metrics programs are suggested for small software organizations:

   Originally, the software developers greeted our activities with a great deal of skepticism, but they eventually accepted them because we kept our measurements simple, tailored them to each organization, and ensured that they produced valuable information. In the end, the programs provided a foundation for taking care of customers and for planning and carrying out future work.

What Kautz suggests is a commonsense approach to the implementation of any software process-related activity: keep it simple, customize to meet local needs, and be sure it adds value. In the paragraphs that follow, we examine how these guidelines relate to metrics for small shops.

?  How do I derive a set of "simple" software metrics?

"Keep it simple" is a guideline that works reasonably well in many activities. But how do we derive a "simple" set of software metrics that still provides value, and how can we be sure that these simple metrics will meet the needs of a particular software organization? We begin by focusing not on measurement but rather on results. The software group is polled to define a single objective that requires improvement. For example, "reduce the time to evaluate and implement change requests." A small organization might select the following set of easily collected measures:
  •  Time (hours or days) elapsed from the time a request is made until evaluation is complete, t_queue.
  •  Effort (person-hours) to perform the evaluation, W_eval.
  •  Time (hours or days) elapsed from completion of evaluation to assignment of change order to personnel, t_eval.
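
To keep the program simple, a small team only needs to record these measures for each change request and track a few aggregates over time. The following is a minimal sketch in Python; the record layout, identifiers, and all values are illustrative assumptions, not part of the original example.

    # Hypothetical change-request records: queue time (days), evaluation effort
    # (person-hours), and time from evaluation to assignment (days).
    change_requests = [
        {"id": "CR-01", "t_queue": 3.0, "W_eval": 4.5, "t_eval": 1.0},
        {"id": "CR-02", "t_queue": 5.5, "W_eval": 6.0, "t_eval": 2.5},
        {"id": "CR-03", "t_queue": 2.0, "W_eval": 3.0, "t_eval": 0.5},
    ]

    # Simple aggregates: the average of each measure across all change requests.
    for measure in ("t_queue", "W_eval", "t_eval"):
        average = sum(cr[measure] for cr in change_requests) / len(change_requests)
        print(f"average {measure}: {average:.1f}")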