technology to collect otherwise unavailable data, such as the use of commercial
Doppler radar devices to sense sleep patterns without placing sensors on the body
(Rahman et al., 2015).
These custom sensing approaches may require engineering help and signal-processing
expertise not always found on HCI research teams, but the broad possibilities for
innovation and insight can often be well worth the effort.
Motion and position-sensing devices have many potential applications in HCI
research, from assessing everyday activity such as posture, to studying activity while
using a system, to forming the basis for new input modalities. Although custom-
designed sensors will likely be the approach of choice for those with the engineering
capability who are truly interested in pushing the envelope, the availability of cheaper
and smaller sensors places these tools within the reach of many HCI researchers.
13.3.2 MOTION TRACKING FOR LARGE DISPLAYS AND VIRTUAL
ENVIRONMENTS
Some forms of HCI inherently require users to move around in space. Users of wall-
sized displays routinely move from one side to another, or up and down, just as teach-
ers in a classroom move to different parts of the room. Users of virtual environments
turn their heads, walk around, and move their hands to grasp objects. Collecting data
that will help understand patterns of motion—where do users move, how do they
move, and when do they do it?—requires data collection tools and techniques beyond
those used with desktop systems.
Motion-tracking tools that combine cameras with markers worn by study participants
can follow movement through a large space. As the participant moves, the cameras
use the marker to build a record of where the participant went and when.
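To make the form of such a record concrete, the sketch below logs timestamped marker positions to a CSV file at a fixed sampling rate. It is a minimal illustration rather than any particular vendor's API: the read_marker_position function is a hypothetical stand-in for whatever call the tracking system's SDK actually provides.

```python
import csv
import time

def read_marker_position():
    """Hypothetical stand-in for the tracking system's SDK call;
    returns the marker's (x, y, z) position in meters."""
    raise NotImplementedError("replace with your tracker's API")

def log_positions(path, duration_s=60.0, rate_hz=30.0):
    """Record timestamped marker positions to a CSV file."""
    start = time.time()
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "x", "y", "z"])
        while (elapsed := time.time() - start) < duration_s:
            x, y, z = read_marker_position()
            writer.writerow([elapsed, x, y, z])
            time.sleep(1.0 / rate_hz)
```

A trace in this form supports the kinds of questions raised above: where participants went, when, and how far they moved.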
One study used this approach to examine activity in the course of using a wall-sized
display (24 monitors, arranged as 8 columns of 3 monitors each, see Figure 13.2) to
search and explore real-estate data. Researchers were interested to see whether users
would move around more (physical navigation) or use zooming and panning mecha-
nisms (virtual navigation).
Participants wore a hat with sensors for the motion-tracking system (Figure 13.3),
which recorded their activity. Display widths ranging from one column to all eight
were used to study the effect of display width.
Participants generally used virtual navigation less and physical navigation more with
wider displays. They also preferred physical navigation (Ball et al., 2007).
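One plausible way to operationalize this comparison, sketched below under assumed data formats (this is not the analysis pipeline Ball et al. describe), is to sum the distances between successive tracked positions as a measure of physical navigation and to count zoom and pan events as a measure of virtual navigation.

```python
import math

def physical_path_length(trace):
    """Total distance moved, given (t, x, y, z) samples in meters."""
    return sum(
        math.dist(a[1:], b[1:])  # Euclidean distance between samples
        for a, b in zip(trace, trace[1:])
    )

# Toy data: a short position trace and a log of virtual-navigation events.
trace = [(0.0, 0.0, 0.0, 1.7), (1.0, 0.5, 0.0, 1.7), (2.0, 1.2, 0.4, 1.7)]
zoom_pan_events = ["zoom_in", "pan_left", "zoom_out"]

print(f"physical navigation: {physical_path_length(trace):.2f} m")
print(f"virtual navigation:  {len(zoom_pan_events)} events")
```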
Researchers have used sensors that directly measure the position and orientation
of various body parts to answer questions about movement and activity in immersive
virtual environments. In one study, participants used a head-mounted display and a
3D mouse to interact with an immersive environment. Sensors monitored the position
of the head, arms, legs, or other appropriate body parts. This approach provided in-
sights into user activity in a variety of applications of virtual environments, including
the diagnosis of attention deficit hyperactivity disorder (ADHD) and neurological
rehabilitation of stroke patients (Shahabi et al., 2007).
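A logger for this kind of study might poll each tracked body part for its position and orientation on every frame, along the lines of the sketch below. The read_pose function and the set of tracked parts are assumptions standing in for whatever the tracking API of the head-mounted display system actually exposes; orientation is represented here as a quaternion, one common convention.

```python
import csv
import time

BODY_PARTS = ["head", "left_hand", "right_hand"]  # sensors of interest

def read_pose(part):
    """Hypothetical stand-in for the VR system's tracking query;
    returns ((x, y, z), (qw, qx, qy, qz)) for the named body part."""
    raise NotImplementedError("replace with your VR tracking API")

def log_poses(path, duration_s=60.0, rate_hz=60.0):
    """Record a timestamped position + orientation sample per body part."""
    start = time.time()
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "part", "x", "y", "z", "qw", "qx", "qy", "qz"])
        while (elapsed := time.time() - start) < duration_s:
            for part in BODY_PARTS:
                (x, y, z), (qw, qx, qy, qz) = read_pose(part)
                writer.writerow([elapsed, part, x, y, z, qw, qx, qy, qz])
            time.sleep(1.0 / rate_hz)
```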