126 CHAPTER 5 Surveys
The question is often asked whether responses from electronic (web-based or e-mail)
surveys are as trustworthy or valid as paper surveys. There is no evidence to suggest
that people are more dishonest in online surveys than in paper surveys, as people can
lie easily in both. However, there is evidence that people, when delivering bad news,
are more honest in online communication than face to face (Sussman and Sproull,
1999). There is also evidence that people, when they care about a topic, are likely
to be very honest. If the surveys can be submitted anonymously, this may also lead
to an increased level of self-disclosure (McKenna and Bargh, 2000; Spears and Lea,
1994). Therefore, web-based surveys can sometimes be superior to e-mailed surveys
(which clearly identify the respondent) for dealing with sensitive information (Sue
and Ritter, 2007). In addition, respondents to self-administered surveys tend to
provide more honest answers to sensitive questions than in interviews (Couper, 2005).
Overall, the likelihood that someone will lie is the same in an electronic survey as
in a paper-based survey.
In traditional paper-based surveys, individuals may have to sign an “informed
consent form” (also known as an institutional review board [IRB] or human subjects
form), acknowledging that they are aware that they are taking part in a research
project and giving their consent. There is debate as to how individuals can best give
informed consent when they respond to a survey online. For more information on
informed consent online, please see Chapter 15.
5.10 PILOT TESTING THE SURVEY TOOL
After a survey tool is developed, it is very important to do a pilot study (also
known as pretesting the survey) to help ensure that the questions are clear and
unambiguous. A pilot study addresses two different areas of interest: the questions
themselves and the interface of the survey. While interface features primarily refer
to web-based or e-mailed surveys, there are also interface features on paper-based
surveys. For instance, on a paper survey, there should be an examination of issues
such as the font face and type size, spacing, use of grids, and cover designs (Dillman,
2000). While pilot tests of the questions and of the layout are theoretically
separate sessions, in reality they take place at the same time. See Chapter 10 for
more information on usability testing of a computer interface.
Dillman (2000) suggests a three-stage process of pretesting a survey, while noting
that it is rarely done thoroughly. The three stages are as follows:
1. Review of the survey tool by knowledgeable colleagues and analysts.
2. Interviews with potential respondents to evaluate cognitive and motivational
qualities in the survey tool.
3. Pilot study of both the survey tool and implementation procedures.
The idea of this three-stage process is that you start with people who are
knowledgeable but are not potential respondents, just as in usability testing
(see Chapter 10). You begin with