Page 206
and experience. In an organization like this, it is not uncommon to draft technical support
staff, junior programmers, end users, outside temps, and sales people as “testers.” They see
their job as simply “banging on the software” and providing comments as to whether or
not they like it. If the programmers have not done sufficient unit testing (see Chapter 7), it
is likely that these people will find places where the software breaks or crashes. However,
it’s a crapshoot: while they may find valid, great defects, it’s likely that many more will
slip through unnoticed.
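The unit testing that Chapter 7 refers to is something the programmers do themselves, long before a build reaches the testers. A minimal sketch, using a hypothetical `apply_discount` function invented for illustration, shows the idea: each function is exercised against known inputs and expected outputs, so obvious breaks and crashes are caught at the source.

```python
def apply_discount(price, percent):
    """Hypothetical example function: return price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # Ordinary case: 10% off a 100.00 item.
    assert apply_discount(100.00, 10) == 90.00
    # Boundary cases: no discount, and a full 100% discount.
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.00, 100) == 0.00


test_apply_discount()
print("unit tests passed")
```

Tests like these do not replace the testers' work; they simply keep the crash-level defects from consuming the testers' time.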
The problem is that it’s not enough to understand the business of the organization. To test
the software, a tester needs to be more than an educated user; she needs to understand
the requirements and be able to verify that they have been implemented. When the
testers do not have a good understanding of the software requirements, they will miss
defects. There are many reasons testers may have this problem. If there are not good
requirements engineering practices in place at the organization, the testing strategy will
certainly go awry. This happens when there are uncontrolled changes to requirements,
when requirements and design documents are constantly changing, or with software
development that is not based on specifications at all (for example, when stakeholders
have approached programmers directly).
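One common defense against uncontrolled requirements changes is to trace every test case back to the requirement it verifies. The sketch below assumes hypothetical requirement IDs and test names; the point is only the bookkeeping: a requirement with no linked test, or a test pointing at a requirement that no longer exists, is flagged before testing begins rather than discovered as a missed defect.

```python
# Hypothetical requirements and test cases, for illustration only.
requirements = {
    "REQ-1": "User can log in with a valid account",
    "REQ-2": "User can reset a forgotten password",
}

# Each test case records which requirement(s) it verifies.
test_cases = {
    "test_login_valid": ["REQ-1"],
    "test_login_locked_out": ["REQ-1"],
    # Note: no test case references REQ-2 yet -- a gap the report exposes.
}

covered = {req for reqs in test_cases.values() for req in reqs}
untested = sorted(set(requirements) - covered)
orphaned = sorted(covered - set(requirements))

print("untested requirements:", untested)
print("tests referencing dropped requirements:", orphaned)
# Prints: untested requirements: ['REQ-2']
#         tests referencing dropped requirements: []
```

When requirements documents change constantly, re-running a check like this shows the testers exactly which parts of their plan are now verifying behavior nobody asked for, and which requested behavior nobody is verifying.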
Programmer “gold plating” is especially problematic for testing. Most programmers are
highly creative, and sometimes they have a tendency to add behavior to the software that
was not requested. It is difficult for testers—especially ones who are not working from
requirements documents—to figure out what behavior in the software is needed by the
users, and what is extraneous. However, all of the software must work, and since the
gold-plating features may never have been written down (or even discussed), the tester
has difficulty figuring out what the software is even expected to do.
In some organizations, the testers are completely out of the loop in the software project
until the product is complete. The programmers will simply cut a build and “throw it over
the wall” to the testers; the testers are somehow supposed to intuit whether or not the
software works, despite the fact that they were not involved in the requirements, design,
or programming of the software, and have no prior experience with it. The testers are not
told what the software is supposed to do; they are just supposed to make the software
“perfect.”
But most commonly, defects slip through because of schedule pressure. The most effec-
tive, careful, and well-planned testing effort will fail if it is cut short in order to release the
software early. Many project managers have trouble resisting the urge to cut testing
short and release the current build, because they see a build of the software that seems
to run. And since the testing activities always fall at the tail end of the software project,
they are the first to be compressed or cut when the project runs late and the project
manager decides to release the software untested.
198 CHAPTER EIGHT