“But It Worked For Us!”
When a product is not tested in all of the environments in which it will be used, the test
results are skewed. The tests may still uncover defects, but it's much more likely that users
will find glaring problems when they begin to use the software in their own environment.
This is frustrating, because the tests that were run would have found these problems had
they been conducted in an environment that resembled actual operating conditions.
Sometimes the tests depend on data that does not really represent what the users will
input into the software. For example, a tester may verify that a calculation is performed
correctly by providing a few examples of test data and comparing the results calculated
by the software against results calculated by hand. If this test passes, it may simply mean
that the tester chose data with the same characteristics as the data the programmer had in
mind when writing the software in the first place. It may well be that when the software
goes out to be used, the users will provide all sorts of oddball data with unexpected and
possibly even disastrous results. In other words, it's easy to verify that an addition function
calculates “2 + 2” properly, but it's also important to make sure it does the right thing when
the user tries to calculate “2 + Apple.”
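As a rough illustration, a unit test along these lines might look like the following sketch. The add() function and its TypeError handling are hypothetical stand-ins for whatever the product actually does; the point is that the test data must include the oddball case, not just the data the programmer had in mind:

```python
# A minimal sketch: test both the expected input and the oddball input.
# The add() function here is hypothetical, not from the text.
import unittest

def add(a, b):
    """Add two values, rejecting anything that isn't a number."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("add() requires numeric input")
    return a + b

class AddTests(unittest.TestCase):
    def test_typical_input(self):
        # The easy case: the kind of data the programmer expected.
        self.assertEqual(add(2, 2), 4)

    def test_oddball_input(self):
        # The case a user will eventually try: "2 + Apple".
        with self.assertRaises(TypeError):
            add(2, "Apple")

if __name__ == "__main__":
    unittest.main()
```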
It’s common for a system to work fine with test data—even with what seems to be a lot of
test data—yet grind to a halt when put in production. Systems that work fine with a small
dataset or few concurrent users can die in real-world usage. It can be very frustrating and
embarrassing for the engineering team when the product that they were highly confident
in breaks very publicly, because the testers did not verify that it could handle real-world
load. And when this happens, it’s highly visible because, unless those heavy load condi-
tions are verified in advance, they only happen once the users have gained enough confi-
dence in the system to fully migrate to it and adopt it in their day-to-day work.
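To make the difference between functional test data and real-world load concrete, here is a rough sketch of a load check. The handle_request() function, the number of simulated users, and the dataset size are all illustrative assumptions, not anything prescribed by the text:

```python
# A minimal load-check sketch: drive a hypothetical handle_request()
# with many concurrent callers and far more records than a functional
# test would use, and report how long the run took.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(record):
    """Stand-in for the system under test; replace with a real call."""
    return sum(record)  # trivial work, just for the sake of the sketch

def run_load_test(num_users=50, records_per_user=1000):
    """Simulate many users each submitting a batch of records."""
    dataset = [list(range(100))] * records_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        futures = [pool.submit(handle_request, record)
                   for _ in range(num_users) for record in dataset]
        for future in futures:
            future.result()  # propagate any failures
    elapsed = time.perf_counter() - start
    print(f"{num_users} users x {records_per_user} records: {elapsed:.2f}s")

if __name__ == "__main__":
    run_load_test()
```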
Some of the trickiest problems come about when there are differences—whether subtle or
large—between the environment in which the product is being tested and the environ-
ment that it will be used in. Operating systems change often: security patches are released,
and various components are upgraded or have different versions in the field. There may
be changes in the software with which the product must integrate. There are differences in
hardware setups. Any of these differences can introduce defects. It's up to the software
tester to understand the environment in which the software will be deployed and the kinds
of problems that could arise there. If she does not, entire features of the software could be
unusable when it is released, because behavior that worked fine on the tester's computer
breaks once the software is out in the field.
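One simple practice that helps here is recording the environment each test run executed in, so it can later be compared against what users actually have in the field. The following sketch shows one illustrative way to capture such a snapshot; the field names are assumptions, not a prescribed format:

```python
# A minimal sketch: capture basic facts about the test environment so that
# gaps between it and the production environment are visible in the report.
import json
import platform
import sys

def capture_test_environment():
    """Collect basic facts about where the tests ran."""
    return {
        "os": platform.system(),
        "os_version": platform.version(),
        "machine": platform.machine(),
        "python_version": sys.version.split()[0],
    }

if __name__ == "__main__":
    # Attach this snapshot to the test report; differences from users'
    # environments point at exactly the gaps this section warns about.
    print(json.dumps(capture_test_environment(), indent=2))
```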