By and large, these myths are not true. However, some people were attracted to software testing specifically because of these myths and, as a result, have turned them into self-fulfilling prophecies. There is a tiny minority of people in the software testing
world who really do enjoy being nasty to programmers. Often, these people do not have
much skill in software testing. However, these people represent the exception and not the
rule. Treating all software testers based on this stereotype is always counterproductive.
Divide Quality Tasks Efficiently
When looked at from a high level, it seems that testing tasks can be easily divided between
testers and programmers. The programmers are responsible for making sure that the soft-
ware does what they intended it to do; the testers are responsible for making sure that the
software does what the users and stakeholders intended it to do. However, when testing is
actually under way, grey areas of responsibility often emerge between programming and QA, and these can become a source of contention inside the team. It is up to the project manager to make sure that tasks are distributed according to what is efficient, not simply according to how people define their jobs.
Many people mistakenly believe that software testers can test “everything” in an applica-
tion. When a defect is found after the software is released, they expectantly look to the QA
team to figure out why they did not catch it. Sometimes it is true that QA should have
caught a defect. If a behavior is written into the specification or planned in the design, but the software does not properly implement that requirement, the testers should have caught it. If there are test cases that were supposed to exercise the
behavior and were marked as “passed,” then it is reasonable to assume that a tester made
a mistake. But it is unreasonable to think that every client complaint, every crash, or
every “bug” should be caught by QA. Not only is it unreasonable; it is simply impossible. The job of a software tester is to verify that the software meets its requirements. If
there is a core behavior that users will expect and that stakeholders need, the only way for
the tester to know this is for a requirements analyst or designer (or a project manager, if
necessary!) to write down exactly how the software should behave. It is unreasonable to
expect the tester to come up with that independently.
Quality is everyone’s responsibility. Some programmers look at QA as a sort of “quality
dumping ground” where all quality tasks can be relegated. Study after study, book after
book, and, most importantly, practical experience show that this approach fails every
time. Software testers just can’t tack quality onto a product at the end of the project. Qual-
ity must be planned in from the beginning, and every project team member needs to do
his or her part to make sure that the defects are caught.
Consider the example of a defect that is very complicated to reproduce: for example, it may occur in only one environment with a specific set of data. In some organizations, rules put in place by management require that the software testers spend days
researching what might be causing this problem: reconstructing scenarios, attempting to
reproduce it in different environments, trying to recreate corrupted data... all of which are highly time-consuming. This is a very inefficient use of engineering time, if it’s the case
192 CHAPTER EIGHT