
Gather Metrics
Metrics are statistics gathered over the course of the software project in order to identify accomplishments and areas for improvement. With a good metrics program, it's possible to compare projects to one another, regardless of size or scope. The ability to make these comparisons will help a project manager make consistent judgments and set standards for project teams across multiple projects.

It is not difficult to gather most simple metrics. The information needed to calculate them is available in the defect tracking system and the project schedule. (In Chapter 4, valuable earned value metrics were gathered and discussed.) In addition to earned value, there are several other metrics that can be useful to a project manager. The project manager should work with the senior management of the organization to determine what is worth measuring.

All metrics should be taken on an organization-wide level. It is important that the numbers that are gathered are used for improving the organization, and not for rewarding or penalizing individual software engineers. It is tempting, for example, to make part of an annual bonus dependent on the team reducing certain defect measurements. In practice, that is an effective way to make sure that defects do not get reported or tracked properly.

Here is a list of metrics commonly used in software projects:

• The Defects per Project Phase metric provides a general comparison of defects found in each project phase. This metric requires the person entering the defect in the defect-tracking database to enter the phase at which the defect was found. It also requires that defects from document reviews be included, and that those defects be prioritized in the same manner as all other defects. (For example, defects found during an SRS inspection would be classified as found during the requirements phase.) This metric is usually shown as a bar graph, with the project phases on the X-axis, the number of defects on the Y-axis, and the data points being the number of defects found per phase (one sub-bar per priority).
• The Mean Defect Discovery Rate metric weighs the number of defects found by the effort on a day-by-day basis over the course of the project. This is a standard and useful tool to determine whether a specific project is going according to a projected defect discovery rate (based on previous projects and industry averages). This rate should slow down as the project progresses, so that far more defects are found at the beginning of software testing than at the end. If this metric remains constant or, even worse, is increasing over the course of several test iterations, it could mean that there are serious scope or requirements problems, or that the programmers did not fully understand what the software was supposed to do.
• The Defect Resolution Rate tracks the time taken to resolve defects, from the time that they are entered until the time that they are closed. It is calculated by dividing the number of non-closed problems by the average time to close. This can be used to predict release dates by extrapolating from the current open defects.
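Under hypothetical data, the calculations behind the three metrics above can be sketched as follows. The defect records, iteration counts, and the release-date formula at the end are all illustrative assumptions, not a standard calculation; in practice the records would come from the defect tracking system, and each organization chooses its own extrapolation.

```python
from collections import Counter
from datetime import date

# Hypothetical defect records (phase found, priority, date entered, date
# closed); closed is None for defects that are still open.
defects = [
    ("requirements", "high",   date(2024, 1, 5), date(2024, 1, 8)),
    ("design",       "low",    date(2024, 2, 1), date(2024, 2, 10)),
    ("coding",       "high",   date(2024, 3, 3), date(2024, 3, 5)),
    ("testing",      "medium", date(2024, 4, 1), None),
    ("testing",      "high",   date(2024, 4, 2), date(2024, 4, 6)),
]

# Defects per Project Phase: one count per (phase, priority) pair -- the
# data behind each sub-bar of the bar graph described above.
defects_per_phase = Counter((phase, pri) for phase, pri, _, _ in defects)

# Mean Defect Discovery Rate trend: defects found per test iteration
# (made-up counts) should decline as testing progresses.
found_per_iteration = [42, 31, 19, 11, 6]
rate_is_slowing = all(
    earlier >= later
    for earlier, later in zip(found_per_iteration, found_per_iteration[1:])
)

# Defect Resolution Rate input: average days from entry to closure.
days_to_close = [(c - o).days for _, _, o, c in defects if c is not None]
avg_days_to_close = sum(days_to_close) / len(days_to_close)

# Release-date extrapolation (one common approach; formulas vary by
# organization): open defects divided by the closure rate observed so
# far estimates the days of defect-fixing work remaining.
open_count = sum(1 for *_, c in defects if c is None)
first_entry = min(o for _, _, o, _ in defects)
last_close = max(c for *_, c in defects if c is not None)
closure_rate = len(days_to_close) / (last_close - first_entry).days
days_remaining = open_count / closure_rate
```

Grouping with a `Counter` keyed on (phase, priority) mirrors the stacked bar graph directly: each key is one sub-bar, and its count is that sub-bar's height.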




                                                                                       SOFTWARE TESTING  195