              as well as using manual control. A later study by the same research group
              (Leeb et al., 2015) asked nine participants with motor disabilities (tetraplegia,
              myopathy, etc.) to control a telepresence robot using a shared control strat-
              egy similar to the one used by Carlson and Millán (2013) for powered
              wheelchairs. The participants were able to successfully complete naviga-
              tional tasks in an unfamiliar environment, demonstrating that people with
              disabilities could use such technology to interact with friends, relatives,
              and health-care professionals in other buildings and perhaps even cities.
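
              In such shared control schemes, the robot typically fuses the user's
              sparse, noisy BCI commands with its own sensor readings, handling
              fine-grained obstacle avoidance itself while the user supplies only
              high-level direction. The Python sketch below illustrates this general
              idea; the function name, velocity limits, and repulsion rule are
              illustrative assumptions, not the controller actually used by Leeb
              et al. (2015) or Carlson and Millán (2013).

import numpy as np

def shared_control(bci_command, sensor_ranges, sensor_angles,
                   v_max=0.3, w_max=0.8, safe_dist=1.0):
    """Blend a sparse BCI command with reactive obstacle avoidance.

    bci_command   : 'forward', 'left', 'right', or None (no new command)
    sensor_ranges : obstacle distances in metres from the robot's sensors
    sensor_angles : bearing of each reading in radians, 0 = straight ahead
    Returns (linear_velocity, angular_velocity) sent to the motors.
    """
    # Map the discrete user intent to nominal velocities; between commands
    # the robot keeps moving, so the user need not issue a command for
    # every small course correction.
    intent = {'forward': (v_max, 0.0),
              'left':    (0.5 * v_max,  w_max),
              'right':   (0.5 * v_max, -w_max),
              None:      (0.5 * v_max, 0.0)}
    v, w = intent[bci_command]

    # Each obstacle inside the safety radius steers the robot away from it
    # and scales the forward speed down as the obstacle gets closer.
    for r, a in zip(sensor_ranges, sensor_angles):
        if r < safe_dist:
            weight = (safe_dist - r) / safe_dist    # 0 = far, 1 = touching
            w -= np.sign(a) * weight * w_max        # turn away from obstacle
            v *= 1.0 - 0.8 * weight                 # slow down near obstacle
    return v, float(np.clip(w, -w_max, w_max))

# Example: the user commands 'left' while an obstacle sits 0.6 m away at
# 0.4 rad to the left; the controller turns less sharply and slows down.
v, w = shared_control('left', [0.6, 2.5], [0.4, -0.3])
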
                 In a related example, Riechmann et al. (2016) trained participants to
              move an avatar through a three-dimensional virtual kitchen environment
              using code-modulated visual evoked potentials (cVEP), a method similar to
              SSVEPs. The virtual kitchen was presented on a screen from the avatar’s per-
              spective (similarly to a first-person computer game), and 8–12 different
              cVEP stimuli were overlaid on top of the kitchen. The cVEP stimuli con-
              sisted of four movement buttons (move forward/backward/right/left), four
              buttons for looking around (up/down/left/right), and up to four action but-
              tons (oven, cup, coffee machine, sink). Each button flickered with a
              different pseudorandom code sequence and could be selected by looking
              at it, much as targets are selected in the standard SSVEP
              control paradigm. When the avatar moved, the view of the kitchen scene
              changed, but the cVEP stimuli remained in the same place. Furthermore,
              the movement and looking buttons were shown at all times while the action
              buttons were only shown if the corresponding kitchen item was within the
              view of the participant’s avatar. Participants were asked to use the cVEP
              interface to move around the kitchen and prepare cups of coffee using a
              sequence of five actions (get cup, put cup into machine, get water from sink,
              put water into coffee machine, turn coffee machine on). Individual desired
              commands (among the 8–12 buttons) were correctly classified with accura-
              cies of around 80%, and well-trained participants were able to complete the
              task with the BCI in approximately twice the time they needed when using a
              keyboard. While this may not seem like an impressive result, it is encour-
              aging for participants with severe impairments, who would not be able to
              use manual commands to perform such tasks.
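
              Selections in a cVEP system are commonly decoded by template
              matching: during training the average EEG response to each button's
              code is stored, and each new epoch is then assigned to the button
              whose template it correlates with best. The following Python sketch
              shows only this final decision step and assumes the data have already
              been spatially filtered and time-locked to the stimulation cycle; the
              function name, threshold, and data layout are illustrative
              assumptions rather than the pipeline of Riechmann et al. (2016).

import numpy as np

def classify_cvep(epoch, templates, min_corr=0.3):
    """Pick the cVEP target whose trained template best matches an epoch.

    epoch     : 1-D array of spatially filtered EEG covering one
                stimulation cycle, time-locked to the code onset
    templates : dict mapping button name -> averaged training response
                (same length as epoch) for that button's code sequence
    min_corr  : reject the epoch if even the best match is this weak
    Returns the selected button name, or None if no template matches.
    """
    def ncorr(x, y):
        # Pearson correlation between two equal-length signals.
        x = (x - x.mean()) / (x.std() + 1e-12)
        y = (y - y.mean()) / (y.std() + 1e-12)
        return float(np.dot(x, y)) / len(x)

    scores = {name: ncorr(epoch, t) for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= min_corr else None

              Because every button flickers with a time-shifted version of the same
              pseudorandom code, the templates can in practice be derived as
              circular shifts of a single trained response, and the rejection
              threshold gives one way to trade selection speed against accuracies
              such as the roughly 80% reported above.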
                 A final interesting example of this application was recently presented at
              the Cybathlon 2016, a competition in which participants with
              disabilities use assistive technologies to compete against one
              another. In the BCI discipline, 11 participants with tetraplegia
              competed in a virtual environment where their avatars raced along an
              obstacle course (Novak et al., 2018) (Fig. 4). The course had multiple repetitions of three
              different types of obstacles, and participants thus had to send one of three