understand how users choose to share different types of content with different people
on the network (Kairam et al., 2012).
12.2.1.3 Empirical studies
Empirical studies of task performance times require some means of capturing tim-
ing data. Although hand-held stopwatches can do this job admirably, software that
measures and records elapsed times between starting events and task completion is
usually more reliable and easier to work with. As described later in this chapter, this
approach has been used extensively in special-purpose software built specifically for
HCI studies.
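A minimal sketch of such instrumentation, assuming a Python script drives the experimental task (the helper names here are hypothetical), might simply record timestamps around each trial:

    import time

    def run_timed_trial(task, participant_id, log):
        # `task` is any callable that returns when the participant
        # signals completion (hypothetical interface).
        start = time.perf_counter()
        task()
        elapsed = time.perf_counter() - start
        # Append one tab-separated record per trial.
        log.write("%s\t%.3f\n" % (participant_id, elapsed))

Capturing the timestamps in software avoids transcription errors and leaves the data in a form that is ready for analysis.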
For experimental tasks involving selections that can be presented as links on
web pages, web servers and their logs present an ideal platform for gathering em-
pirical task performance data. In this model a web server is run on the same ma-
chine that is used to perform the experimental tasks. This eliminates any delays
associated with requesting materials over a network connection. The selection of a
link from a starting page indicates the beginning of the task, with subsequent link
selections indicating intermediate steps. Eventually, a link indicating successful
task completion is selected. The elapsed interval between the selection of the start
and completion links is the task completion time, with access records of intermedi-
ate requests indicating steps that were taken to complete the task and the elapsed
time for each subtask.
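As a rough sketch of how these intervals might be computed, the following Python fragment scans a server log in the Common Log Format and reports the time between the start and completion requests (the page names /start.html and /done.html are hypothetical placeholders):

    import re
    from datetime import datetime

    LOG_LINE = re.compile(r'\[(?P<ts>[^\]]+)\] "GET (?P<path>\S+)')
    TS_FORMAT = "%d/%b/%Y:%H:%M:%S %z"   # Common Log Format timestamp

    def task_time(log_path, start_page="/start.html", done_page="/done.html"):
        # Return the seconds elapsed between the start-link request and
        # the completion-link request, or None if either is missing.
        start = done = None
        with open(log_path) as log:
            for line in log:
                match = LOG_LINE.search(line)
                if not match:
                    continue
                when = datetime.strptime(match.group("ts"), TS_FORMAT)
                if match.group("path") == start_page and start is None:
                    start = when
                elif match.group("path") == done_page:
                    done = when
        if start is not None and done is not None:
            return (done - start).total_seconds()
        return None

The same loop can be extended to report the intervals between successive intermediate requests, yielding per-subtask times.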
This method is not without drawbacks. Extraction of the relevant information
from logs may require manual interpretation or implementation of special-purpose
log analysis software. Timestamps in server log files typically record events with
one-second resolution, so this approach is not suitable for studies that require
finer-grained task times.
Web browser caches may cause additional problems. These caches store local
copies of pages that have been recently accessed. If a user requests a page that is in
the cache, the browser returns the copy that has been stored locally, instead of mak-
ing a new request from the web server. This may cause problems if you are trying to
track every user request, as requests for cached pages might not generate web server
log entries.¹ You may want to turn off caching facilities before using a particular
browser to run an experiment.
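Alternatively, the experimental web server itself can forbid caching, so that every page request reaches the log regardless of how the browser is configured. A minimal sketch using Python's standard http.server module (serving pages from the current directory on a hypothetical port):

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class NoCacheHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Ask the browser not to store local copies, so every
            # selection generates a fresh request and a log entry.
            self.send_header("Cache-Control", "no-store, max-age=0")
            super().end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), NoCacheHandler).serve_forever()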
One helpful strategy for keeping data clean and clear is to start each session with
an empty log file. After the session is complete, the file can be moved to a separate
directory containing all of the data for the given subject. This simplifies analysis
and prevents any problem associated with disentangling multiple participants from
a longer log file.
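A small housekeeping script, again only a sketch (the file and directory names are placeholders), can automate this step between sessions:

    import shutil
    from pathlib import Path

    def archive_session_log(log_file, subject_id, data_root="data"):
        # Run between sessions, with the web server stopped, to move the
        # completed log into the subject's directory and leave an empty
        # log file ready for the next participant.
        dest = Path(data_root) / subject_id
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(log_file), str(dest / Path(log_file).name))
        Path(log_file).touch()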
In practice, these drawbacks usually do not create serious problems. The
“Simultaneous vs. Sequential Menus” sidebar describes a study that used server logs
to compare alternative web menu designs.
¹ Then again, they might. It all depends on the server configuration. However, it is best to be defensive
about such matters: assume that they will not and take appropriate steps.