forward on its own, thus requiring the user to input actions only when they want to change the wheelchair's behavior. Obstacle avoidance is achieved by means of cameras and sonar sensors attached to the wheelchair; these sensors constantly scan the area around the wheelchair, creating an "occupancy grid" of nearby obstacles. If an obstacle is detected partially in the wheelchair's path, it is treated as a repeller in the occupancy grid, causing the wheelchair to automatically swerve to avoid it and then continue on its original path. However, if an obstacle is directly in front of the wheelchair, the wheelchair will slow down and smoothly stop in front of it, then remain stationary until the user executes a turn command via the BCI. This allows the user to "dock" with an object of interest (e.g., a table or sink) by aiming the wheelchair directly at it. Such a shared control paradigm successfully combines the intelligence and desires of the user with the precision of the machine, allowing experienced unimpaired users to complete tasks with the BCI approximately as fast as with a two-button manual input. We believe that such shared control, where users give high-level commands through a BCI and the machine takes care of low-level details, represents the future of practical BCI control and will be adopted by a broad range of applications.
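
The two rules described above can be made concrete with a short sketch. The following Python fragment is illustrative only, not the actual implementation: the grid size, distance thresholds, repeller gain, and function names are all assumptions. It shows an obstacle partially in the path acting as a repeller that bends the steering command, while an obstacle directly ahead forces a smooth stop until the user issues a turn command through the BCI.

    import numpy as np

    GRID = 21        # occupancy grid is GRID x GRID cells (assumed)
    CELL_M = 0.25    # metres per grid cell (assumed resolution)
    STOP_M = 0.75    # head-on obstacles closer than this force a stop
    GAIN = 0.8       # strength of the repeller effect (assumed)

    def control_step(grid, speed, bci_turn=0.0):
        # grid: GRID x GRID occupancy probabilities; the wheelchair sits
        # at the bottom-centre cell and faces the top row. bci_turn is
        # the user's high-level command (-1 = left, +1 = right, 0 = none).
        rows, cols = grid.shape
        centre = cols // 2

        # Distance to the nearest occupied cell straight ahead.
        ahead = np.where(grid[:, centre] > 0.5)[0]
        dist = (rows - 1 - ahead.max()) * CELL_M if ahead.size else np.inf

        if dist < STOP_M:
            # Obstacle directly in front: stop and wait for a BCI turn
            # command, which lets the user "dock" with the object.
            return 0.0, bci_turn

        # Obstacles partially in the path repel the steering command,
        # weighted by how close they are.
        steer = bci_turn
        for r, c in zip(*np.nonzero(grid > 0.5)):
            lateral = (c - centre) * CELL_M
            forward = (rows - 1 - r) * CELL_M
            if 0 < abs(lateral) < 1.0 and 0 < forward < 3.0:
                steer -= GAIN * np.sign(lateral) / (forward + 0.1)
        return speed, float(np.clip(steer, -1.0, 1.0))

    # An obstacle slightly left of the path produces a rightward swerve:
    g = np.zeros((GRID, GRID))
    g[13, 8] = 1.0
    print(control_step(g, speed=0.5))   # -> (0.5, ~0.43)

Treating lateral obstacles as repellers rather than hard constraints is what lets the wheelchair swerve and then rejoin its original path, while the head-on case deliberately hands control back to the user.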


2.2 Control of Mobile Robots and Virtual Avatars

The same principles described in the previous section can be used to control not only wheelchairs but also other types of mobile robots, and even avatars in virtual environments. For example, in a classic study by Millán et al. (2004), two participants were taught to steer a mobile robot through multiple rooms using motor and mental imagery. Specifically, three mental images (relax, move left arm, and move right arm for one participant; relax, move left arm, and mental cube rotation for the other) were translated into different robot commands by the BCI, with the exact interpretation of the mental state depending on the location of the robot. For example, if the robot was located in an open area, the "move left arm" motor image caused the robot to turn left; however, if there was a wall to the robot's left, "move left arm" caused the robot to follow the wall. In all situations, the "relax" image caused the robot to move forward and automatically stop when an obstacle was detected in front
of it. Finally, three lights on top of the robot were always visible to the participants and indicated which of the three motor or mental images was currently being detected by the BCI. Using this control approach, the two participants were able to complete steering and navigational tasks nearly as well as with manual control.
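
The key idea, namely that the same decoded mental state maps onto different behaviors depending on the robot's surroundings, can be captured in a few lines. The sketch below is a loose illustration under assumed names and context flags, not the controller actually used in the study:

    from enum import Enum, auto

    class Mental(Enum):
        RELAX = auto()
        LEFT_ARM = auto()
        CUBE_ROTATION = auto()   # right-arm imagery for the other participant

    class Behaviour(Enum):
        FORWARD = auto()         # drive ahead, auto-stopping at obstacles
        STOP = auto()
        TURN_LEFT = auto()
        TURN_RIGHT = auto()
        FOLLOW_LEFT_WALL = auto()
        FOLLOW_RIGHT_WALL = auto()

    def interpret(state, wall_left, wall_right, obstacle_ahead):
        # Map the decoded mental state to a behavior, given context
        # flags that would come from the robot's own sensors.
        if state is Mental.RELAX:
            # "Relax" always means guarded forward motion.
            return Behaviour.STOP if obstacle_ahead else Behaviour.FORWARD
        if state is Mental.LEFT_ARM:
            # Open area: turn left. Wall on the left: follow it instead.
            return Behaviour.FOLLOW_LEFT_WALL if wall_left else Behaviour.TURN_LEFT
        # Third mental image: mirror-symmetric behavior on the right side.
        return Behaviour.FOLLOW_RIGHT_WALL if wall_right else Behaviour.TURN_RIGHT

    def feedback_lights(state):
        # One light per mental class, mirroring the BCI's current output.
        return {m.name: m is state for m in Mental}

    print(interpret(Mental.LEFT_ARM, wall_left=True, wall_right=False,
                    obstacle_ahead=False))   # -> Behaviour.FOLLOW_LEFT_WALL
    print(feedback_lights(Mental.RELAX))

This context dependence is what allows only three mental classes to cover a much larger repertoire of robot behaviors.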