Page 241 - Lean six sigma demystified

Chapter 6   Transactional Six Sigma          219


                             IT managers and application users often expect a new software project or
                           enhancement release of an application to be flawless, and then are stunned by
                           the additional staffing required to stem the tide of rejected transactions.
                             The secret is to
                             1. Quantify the cost of correcting these rejected transactions.
                             2. Understand the Pareto pattern of rejected transactions.
                             3. Analyze 30 rejected transactions one by one to determine the root cause.
                             4. Revise the requirements and modify the system to prevent the problem.
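Steps 1 and 2 can be sketched in a few lines of code. This is a minimal illustration, not part of the case study: the reject counts, bucket names, and per-correction labor cost below are all invented for the example.

```python
from collections import Counter

# Hypothetical rejected-transaction log: one error-bucket label per rejection.
rejects = (["address_mismatch"] * 420 + ["missing_feature_code"] * 310 +
           ["bad_rate_plan"] * 95 + ["duplicate_order"] * 40 + ["other"] * 35)

COST_PER_CORRECTION = 12.50  # assumed fully loaded labor cost per manual fix

# Step 1: quantify the cost of correcting these rejected transactions.
total_cost = len(rejects) * COST_PER_CORRECTION
print(f"Monthly correction cost: ${total_cost:,.2f}")

# Step 2: understand the Pareto pattern -- rank error buckets by frequency
# and show the cumulative share each one adds.
counts = Counter(rejects)
running = 0
for bucket, n in counts.most_common():
    running += n
    print(f"{bucket:22s} {n:5d}  {100 * running / len(rejects):5.1f}% cumulative")
```

Even this crude tally makes the Pareto pattern visible: the top one or two buckets dominate the correction cost, which is where steps 3 and 4 should focus.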

                    Service Order Case Study


                           Information systems invariably fail to capture all of the requirements necessary
                           to facilitate smooth processing of all transactions. So every system is designed
                           with places to capture the fallout and turn it over to people for correction.
                           Unfortunately, little of this information is fed back into improving the informa-
                           tion systems. Huge error correction units blossom to handle the errors that can’t
                           or won’t be corrected until some future release of the information system.
                           Every system produces a variety of error types and, following the 4-50 rule, only
                           a few error types contribute most of the overall fallout. The beauty of applying
                           Six Sigma to information system fallout is that virtually every occurrence of
                           these errors can be eliminated completely.
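The 4-50 rule says that a small fraction of error types accounts for about half of the fallout. A helper like the following, written against the invented counts from the earlier sketch, picks out that "vital few" set of buckets; the function name and threshold are assumptions for illustration.

```python
def vital_few(bucket_counts, threshold=0.5):
    """Return the smallest set of error buckets whose combined share of
    total fallout reaches the threshold (the 4-50 'vital few')."""
    total = sum(bucket_counts.values())
    picked, covered = [], 0
    for bucket, n in sorted(bucket_counts.items(), key=lambda kv: -kv[1]):
        picked.append(bucket)
        covered += n
        if covered / total >= threshold:
            break
    return picked

# Hypothetical monthly fallout by error type:
counts = {"address_mismatch": 420, "missing_feature_code": 310,
          "bad_rate_plan": 95, "duplicate_order": 40, "other": 35}
print(vital_few(counts))
```

Here two of the five buckets carry over half of the fallout; those two are the ones worth eliminating first.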
                             If you count all of the requirements, design, and code defects found in inspec-
                           tions, unit test, integration test, and system test, most software groups have high
                           error rates—somewhere between two and three sigma. We’ve learned to expect
                           defects in software, long development times, and high costs. The goal of the

                           Dirty 30 process is to find and fix the worst software problems first. Let’s look
                           at how the Dirty 30 process helped in this case study.
                             Problem: Service order fallout from a telephone company’s information sys-
                           tems was running at 17% (30,000 errors per month). This caused problems
                           with activation, fulfillment, and billing of wireless phones, and drove the cus-
                           tomer disconnect rate (also called the churn rate) to almost twice the industry average.
                             Process: Typical root cause analysis simply does not work because of the level of
                           detail required to understand each error. Detailed analysis of 30 errors in each of
                           the top six error “buckets” (i.e., the Dirty 30) led to a breakthrough in understand-
                           ing of how errors occurred and how to prevent them. Simple check sheets allowed
                           the root cause to pop out from analysis of this small sample. As expected, the errors
                           clustered in three main categories: add, change, and delete of customer accounts.
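A check sheet of the kind used here is just a tally of root causes over the 30 sampled errors in one bucket. The sketch below shows the idea; the cause names and counts are invented stand-ins, not the case study's actual data.

```python
from collections import Counter

# Hypothetical check-sheet entries: the analyst-assigned root cause for each
# of the 30 errors sampled from a single error bucket.
sample = (["dropped_area_code"] * 14 + ["stale_customer_record"] * 9 +
          ["free_text_in_coded_field"] * 5 + ["unknown"] * 2)

# Tally the causes and print a simple check-sheet view, most frequent first.
tally = Counter(sample)
for cause, n in tally.most_common():
    print(f"{cause:26s} {'|' * n}  ({n}/30)")
```

With only 30 samples, the dominant cause usually pops out immediately, which is exactly why the Dirty 30 sample size works: you need enough errors to see a pattern, not a statistically exhaustive census.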