14.2 Online research
One important difference between online and in-person research is the potentially complete anonymity of participants in online studies. When you meet a participant face-to-face, you can usually make a reasonable guess about their age, gender, and other demographic characteristics. Without face-to-face contact, such details are harder to verify: you have no way of confirming that your online participants are male or female, old or young. This presents recruiting challenges, particularly if your research requires participants who meet certain demographic constraints such as age or gender. If your only contact is via email or other electronic means, you may not be able to verify that the person with whom you are communicating is who they claim to be. Online studies that do not require participants to reveal their true identity (relying instead on email addresses or screen names) are highly vulnerable to deception. Certain incentives, such as offering to enter participants in a draw for a desirable prize, might compound this problem. For example, a survey aimed at a specific demographic group might draw multiple responses from one individual, who might use multiple email addresses to make the responses appear to come from different people. Possible approaches for avoiding such problems include eliminating incentives; requiring proof of demographic status (age, gender, disability, etc.) for participation; and making initial phone or in-person contact to provide some verification of identity. Since payment or other delivery of incentives often requires knowing a participant's name and address, verification of identity is often not an added burden.
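The multiple-submission problem described above can also be mitigated in part by automated screening of responses. The sketch below is a hypothetical illustration (the field names `email` and `ip` are assumptions, not drawn from any particular survey tool): it flags submissions that share a normalized email address or originate from the same IP address, using only the Python standard library.

```python
# Hypothetical sketch: flag likely duplicate survey submissions.
# Field names ("email", "ip") are illustrative, not from a real survey platform.
from collections import defaultdict

def normalize_email(addr: str) -> str:
    """Lowercase the address and strip '+tag' aliases,
    so jane+a@example.org and jane@example.org compare equal."""
    local, _, domain = addr.strip().lower().partition("@")
    local = local.split("+", 1)[0]
    return f"{local}@{domain}"

def flag_duplicates(responses):
    """Return sorted indices of responses that share a normalized
    email address or an IP address with another response."""
    seen = defaultdict(list)
    for i, r in enumerate(responses):
        seen[("email", normalize_email(r["email"]))].append(i)
        seen[("ip", r["ip"])].append(i)
    flagged = set()
    for indices in seen.values():
        if len(indices) > 1:
            flagged.update(indices)
    return sorted(flagged)

responses = [
    {"email": "jane@example.org",   "ip": "10.0.0.1"},
    {"email": "jane+x@example.org", "ip": "10.0.0.2"},  # alias of the first
    {"email": "bob@example.org",    "ip": "10.0.0.3"},
]
print(flag_duplicates(responses))  # [0, 1]
```

Screening of this kind catches only naive duplication; a determined respondent using genuinely distinct addresses and connections can still evade it, which is why the identity-verification steps above remain important.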
14.2.4.3 Study design
Surveys (Lazar and Preece, 1999) (Chapter 5), usability evaluations (Brush et al., 2004; Petrie et al., 2006), and ethnographic studies of support groups (Maloney-Krichmar and Preece, 2005) have all been successfully completed online. Examples of online usability studies have shown that both synchronous studies with domain experts (Brush et al., 2004) and asynchronous studies with users with disabilities (Petrie et al., 2006) have yielded results comparable to those found in traditional usability studies. Perhaps due to difficulties in sampling and controls, online empirical studies of task performance are less common. One study of the influence of informal “sketch-like” interfaces on drawing behavior used an online study as a means of confirming the results of a smaller, traditional study. Results from the 221 subjects in the online study were highly consistent with the results from the 18 subjects in the traditional, controlled study in the lab. The agreement between the two sets of results provides a more convincing argument than the lab study on its own (Meyer and Bederson, 1998).
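A consistency check between an online sample and a lab sample can be made concrete by comparing the two groups on the same task measure, for example with a standardized effect size. The sketch below is an illustration only (the data are synthetic, not Meyer and Bederson's actual measurements): it computes Cohen's d between two samples, where a value near zero suggests the two groups behaved similarly.

```python
# Hypothetical sketch: quantify agreement between an online sample and
# a lab sample on a shared task measure. The data below are synthetic.
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: difference between sample means divided by the
    pooled standard deviation of the two samples."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 +
                  (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

lab    = [12.1, 11.8, 12.4, 12.0, 11.9]            # e.g., task times (s), lab study
online = [12.0, 12.3, 11.7, 12.2, 11.9, 12.1]      # same task, online study
d = cohens_d(online, lab)
print(round(d, 2))  # a small |d| indicates the two samples are consistent
```

An effect size is only one lens on agreement; in practice a formal test (such as a two-sample t-test) and a check of the distributions would accompany it.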
Opinions differ on the appropriateness of online research for different types of data collection. The lack of control over the participant population might be seen as a difficulty for some controlled, empirical studies. Others have argued that because online research does not allow for detailed user observation, it is more appropriate for quantitative approaches (Petrie et al., 2006). In the absence of any clear guidelines, it is certainly appropriate to design studies carefully and to clearly describe and document the reasoning behind any designs that are adopted. When possible, hybrid