8 CHAPTER 1 Introduction to HCI research
introduces the reader to the set of generally accepted empirical research practices
within the field of HCI. A central question, therefore, is: how do we carry out
measurement in HCI research? What do we measure?
In the early days of HCI research, measurement was based on standards for hu-
man performance from human factors and psychology. How fast could someone
complete a task? How many tasks were completed successfully, and how many errors
were made? These are still the basic foundations for measuring interface usability
and are still relevant today. These metrics are very much based on a task-centered
model, where specific tasks can be separated out, quantified, and measured. These
metrics include task correctness, time performance, error rate, time to learn, reten-
tion over time, and user satisfaction (see Chapters 5 and 10 for more information
on measuring user satisfaction with surveys). These types of metrics are adopted
by industry and standards-related organizations, such as the National Institute of
Standards and Technology (in the United States) and the International Organization
for Standardization (ISO). While these metrics are still often used and well-accepted,
they are appropriate only in situations where the usage of computers can be broken
down into specific tasks which themselves can be measured in a quantitative and
discrete way.
Shneiderman has described the difference between micro-HCI and macro-
HCI. The approach described in the previous paragraph, improving a user's
experience with well-established metrics and techniques for task and time
performance, could be considered micro-HCI (Shneiderman, 2011). However, many of the phenomena that
interest researchers at a broader level, such as motivation, collaboration, social par-
ticipation, trust, and empathy, perhaps having societal-level impacts, are not easy to
measure using existing metrics or methods. Many of these phenomena cannot be mea-
sured in a laboratory setting using the human factors psychology model (Obrenovic,
2014; Shneiderman, 2008). And the classic metrics for performance may not be as
appropriate when the usage of a new technology is discretionary and about enjoy-
ment, rather than task performance in a controlled work setting (Grudin, 2006a).
After all, how do you measure enjoyment or emotional gain? How do you measure
why individuals use computers when they don't have to? Job satisfaction? Feeling of
community? Mission in life? Multimethod approaches, possibly involving case stud-
ies, observations, interviews, data logging, and other longitudinal techniques, may be
most appropriate for understanding what makes these new socio-technical systems
successful. As an example, the research area of Computer-Supported Cooperative
Work (CSCW) highlights the sociological perspectives of computer usage more than
the psychological perspectives, with a focus more on observation in the field, rather
than controlled lab studies (Bannon, 2011).
The old methods of research and measurement are comfortable: hypothesis
testing, statistical tests, control groups, and so on. They come from a proud his-
tory of scientific research, and they are easily understood across many different
academic, scientific, and research communities. However, they alone are not suf-
ficient to measure all of today's phenomena. The same applies to the
“old standard” measures of task correctness and time performance. Those metrics