Many people are skeptical when project managers warn them of potential problems with
deploying untested or poorly tested code. It's important to remember some of the most
historic and costly defects that were caused by a "tiny" change in the code. In 1990, an
engineer at AT&T rolled out a very small change to a switch that had one defect in one
line of code (a misspelled "break" statement in C). The long-distance network failed for
over 9 hours, causing over 65 million calls to fail to go through and costing AT&T an
enormous amount of money. There are plenty of other very costly examples: the NASA Mars
orbiter crash caused by one team using metric units and another using English units;
eBay's outage in 1999 caused by a poor database upgrade; the Pentium processor bug that
had a few wrong entries in a floating-point lookup table. All of these problems were
"tiny" changes or defects that cost an enormous amount of money. In the end, nobody cared
how small the source of the problem was: a disastrous problem was still disastrous, even
if it was easy to solve.
This does not mean that the entire test battery needs to be run every time a deployment
or a change is made. It means that an informed decision must be made, and the risks
assessed, before the test battery is cut down for any reason. For example, test
procedures that target specific areas of functionality could be reduced when changes are
limited, and when the risk of those changes is low. There is no one-size-fits-all test
plan that will result in proper coverage for every application. Any time a limited change
is made to the software, it should be considered carefully, in order to make sure that
the appropriate tests are executed.
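One way to make that decision repeatable is to record, for each functional area, which test procedures cover it, and then select tests based on the areas a change touches. The following is a minimal sketch of that idea; the area names, suite names, and the `select_tests` helper are all invented for illustration, not taken from any particular tool:

```python
# Hypothetical map from functional areas to the test procedures
# that cover them. Names are invented for illustration.
TEST_SUITES = {
    "billing": ["test_invoice_totals", "test_tax_rules"],
    "search": ["test_query_parser", "test_ranking"],
    "ui": ["test_login_flow", "test_navigation"],
}

# A small regression suite that runs no matter how limited the change is.
ALWAYS_RUN = ["test_smoke_suite"]

def select_tests(changed_areas):
    """Return a reduced test battery for a limited, low-risk change."""
    selected = list(ALWAYS_RUN)
    for area in changed_areas:
        # If we can't identify the area, we can't assess the risk,
        # so fall back to running the full battery.
        if area not in TEST_SUITES:
            return ALWAYS_RUN + [
                test for suite in TEST_SUITES.values() for test in suite
            ]
        selected.extend(TEST_SUITES[area])
    return selected
```

A change limited to billing code would then run only the smoke suite plus the billing tests, while a change in an unmapped area would trigger the full battery. The point is not this particular code, but that the reduction is an explicit, reviewable decision rather than an ad hoc one.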
Test Automation
There are software packages available that allow a tester to automate test cases.
Typically, this software uses either a record-and-playback system, in which mouse
movements and keystrokes are recorded and played back into a user interface; a
programming or scripting language that accesses the user interface through a class model;
or a combination of the two. Automation can be a powerful tool for reducing the amount of
time it takes to run the tests.
However, setting up and maintaining test automation adds a great deal of overhead. Now,
instead of simply being written, each test case must be programmed or recorded, tested,
and debugged. A database or directory of test scripts must be maintained, and since even
a small project can have hundreds or thousands of test cases, there will be hundreds or
thousands of scripts to keep track of. What's more, since the scripts hook into the user
interface of the software, there must be some plan in place to keep the scripts working
when the user interface changes.
There have been some recent advances (at the time of this writing) that help cut down on
test automation maintenance tasks. These advances include canned functions that automate
multiple tasks at once, generalization of scripts so that the tester refers to general
business processes instead of specific user interface interactions, and databases of test
scripts that can be maintained automatically. But even with these advances, it is still
highly time-consuming to automate and maintain tests: it often means that test planning
takes several times as long as it would without automation.
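The "generalization" advance mentioned above can be sketched as a simple layering: test scripts call business-process functions, and only one thin layer knows about specific widgets, so a user interface change is fixed in one place instead of in hundreds of scripts. The widget names and helper functions here are invented for illustration:

```python
def click(widget_id):
    # Stand-in for a real automation tool's UI call; it just records
    # what would have been clicked.
    return f"clicked:{widget_id}"

# --- UI layer: the only code that knows specific widget names ---
def submit_order_via_ui():
    # If the button is renamed, only this function needs to change.
    return [click("menu_orders"), click("btn_submit_order")]

# --- business-process layer: what test scripts refer to ---
def place_order():
    return submit_order_via_ui()

# --- a test script written against the business process ---
def test_place_order():
    actions = place_order()
    return actions[-1] == "clicked:btn_submit_order"
```

The trade-off is the same one the section describes: the extra layer reduces maintenance when the UI changes, but it is more code to write, test, and debug up front.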
184 CHAPTER EIGHT