
13.2  Eye tracking




(Pian et al., 2016). Eye tracking has also been used extensively to understand visual processes involved in interpreting complex biomedical data, such as cardiovascular data from electrocardiograms (Bond et al., 2015) and medical imaging, including virtual pathology slides (Krupinski et al., 2006), cranial scans (Venjakob et al., 2016), and other volumetric imaging (Venjakob and Mello-Thoms, 2015).
   Beyond traditional desktop environments, eye tracking presents myriad opportunities for augmented reality, particularly as lower-cost devices become available as commodity hardware. Sensors mounted on eyeglasses and headsets can track gaze direction as users work in specialized environments or carry on day-to-day activities, presenting opportunities for input, object recognition, and control. As a relatively low-cost commodity system capable of gaze tracking, Google Glass inspired significant interest, leading to the development of novel software approaches for data collection and analysis (Jalaliniya et al., 2015). Although Glass was not a commercial success, and has since been discontinued, the increased availability of goggles for virtual and augmented reality seems almost inevitable; commercial successes may be just around the corner. Additional examples of the use of Google Glass in HCI research can be found in Chapter 14.
   Alternative approaches leverage the power of smartphones to enable mobile eye tracking. Commercially available eye-tracking goggles have been combined with smartphone software to map eye-gaze coordinates to locations on a wearer's smartphone screen (Paletta et al., 2014). Of course, the logical conclusion would be to use the smartphone camera itself to do the eye tracking. Kyle Krafka and colleagues presented such a system, trained via a convolutional neural network on data collected from over 1450 people, in a 2016 paper (Krafka et al., 2016).




                   MEASURING WORKLOAD
                   Workload is the effort associated with completing a task. As much of user
                   interaction design aims to develop tools that are easy to use, HCI researchers
                   and designers are often interested in assessing workload. Understanding
                   when and where a tool makes mental demands on users can help us identify
                   opportunities for improvement through redesign.
                      Unfortunately, workload can be very difficult to assess, as our mental
                   processes are not easily observed. To work around this limitation, researchers
                    have expended significant effort developing surveys such as the Subjective
                    Workload Assessment Technique (SWAT) (Reid and Nygren, 1988) and
                   the NASA Task Load Index, or NASA-TLX (Hart and Staveland, 1988; Hart,
                   2006). The NASA-TLX is the most widely used of these instruments, having
                   been used in hundreds of studies. The TLX scale includes six questions
                   assessing mental demand, physical demand, temporal demand, performance,
                   effort, and frustration level, along with a protocol for assessing the relative
                   importance of these six measures to each specific task.
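                       To make the scoring protocol concrete, the sketch below computes
                    an overall TLX score in the standard weighted fashion: each of the
                    six subscales is rated on a 0 to 100 scale, weights are obtained from
                    15 pairwise comparisons in which the respondent selects the more
                    important factor in each pair, and the overall score is the weighted
                    average of the ratings. The sample values are invented for illustration.

                    # Illustrative NASA-TLX scoring sketch (sample data is invented).
                    # Each subscale is rated 0-100; each weight counts how often that
                    # factor was chosen as more important across the 15 pairwise
                    # comparisons, so the weights sum to 15.
                    from itertools import combinations

                    SUBSCALES = ["mental", "physical", "temporal",
                                 "performance", "effort", "frustration"]

                    def tlx_score(ratings, pairwise_choices):
                        """Weighted NASA-TLX: sum(rating * weight) / 15."""
                        assert len(pairwise_choices) == 15, "TLX uses all 15 pairs"
                        weights = {s: 0 for s in SUBSCALES}
                        for winner in pairwise_choices:  # one winner per pair
                            weights[winner] += 1
                        return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

                    # Invented example: one participant's ratings on one task...
                    ratings = {"mental": 70, "physical": 20, "temporal": 55,
                               "performance": 40, "effort": 65, "frustration": 35}
                    # ...and the factor judged more important in each of the 15 pairs
                    # (here a stand-in for the participant's actual judgments).
                    pairs = list(combinations(SUBSCALES, 2))
                    choices = [a if ratings[a] >= ratings[b] else b for a, b in pairs]

                    print(f"Overall workload: {tlx_score(ratings, choices):.1f}")

                       Some studies use the simpler "Raw TLX" variant, which omits the
                    weighting step and averages the six ratings directly (Hart, 2006).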

                                                                           (Continued)