Page 355 - Sensing, Intelligence, Motion : How Robots and Humans Move in an Unstructured World
330 HUMAN PERFORMANCE IN MOTION PLANNING
chapters. This dictates a dramatic change in language and methodology. So far, as
we dealt with algorithms, concepts were specific and well-defined, statements
were proven, and algorithms were designed on the basis of rigorous analysis. We had
definitions, lemmas, theorems, and formal algorithms. We talked about algorithm
convergence and about numerical bounds on algorithm performance.
All such concepts become elusive when one turns to studying human motion
planning. This is not a fault of ours but the essence of the topic. One way to
compensate for the fuzziness is the black box approach, which is often used in physics,
cybernetics, and artificial intelligence: The observer administers to the object of
study—here a human subject—a test with a well-controlled input, observes the
results at the output, and attempts to uncover the law (or the algorithm) that
transfers one into the other.
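The black-box paradigm described above can be sketched in code. Everything in this sketch is illustrative and not from the text: the "subject" is a toy function whose internal rule is hidden from the experimenter, the stimuli are a controlled input set, and the fitting step stands in for whatever inference the observer applies to the input–output pairs.

```python
def black_box_experiment(subject, stimuli, fit):
    """Administer controlled inputs to an opaque subject, record the
    outputs, and try to recover the input-output law from the pairs."""
    trials = [(x, subject(x)) for x in stimuli]  # controlled input -> observed output
    return fit(trials)                           # hypothesized law

# A stand-in "subject" whose internal rule (y = 3x + 1) is hidden
# from the experimenter; only its responses are observable.
def subject(x):
    return 3 * x + 1

# One possible inference step: fit a line by least squares.
def fit_linear(trials):
    n = len(trials)
    sx = sum(x for x, _ in trials)
    sy = sum(y for _, y in trials)
    sxx = sum(x * x for x, _ in trials)
    sxy = sum(x * y for x, y in trials)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = black_box_experiment(subject, stimuli=range(10), fit=fit_linear)
# Recovers a = 3, b = 1 -- the hidden law -- from observations alone.
```

The point of the sketch is the shape of the experiment, not the fitting method: with a subject as complex as a human, no such clean recovery of the internal "law" is to be expected, which is exactly the limitation the next paragraph discusses.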
With an object as complex as a human being, it would not be realistic to
expect from this approach a precise description of motion planning strategies
that humans use. What we expect instead from such experiments is a measure
of human performance, of human skills in motion planning. By using techniques
common in cognitive sciences and psychology, we should be able to arrive at
crisp comparisons and solid conclusions. Why do we want to do this? What are
the expected scientific and practical uses of this study?
One use is in the design of teleoperated systems—that is, systems with
remotely controlled moving machinery and with a human operator being a part
of the control and decision-making loop. In this interesting domain the issues
of human and robot performance intersect. More often than not, such systems
are very complex, very expensive, and very important. Typical examples include
control of the arm manipulator on the Space Shuttle, control of arms at the
International Space Station, and robot systems used for repair and maintenance in
nuclear reactors.
The common view on the subject is that in order to efficiently integrate the
human operator into the teleoperated system’s decision-making and control, the
following two components are needed: (1) a data gathering and preprocessing
system that provides the operator with qualitatively and quantitatively adequate
input information; this can be done using fixed or moving TV cameras and
monitors looking at the scene from different directions, and possibly other sensors; and
(2) a high-quality master–slave system that allows the operator to easily enter
control commands and to efficiently translate them into the slave manipulator
(which is the actual robot) motion.
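Component (2) can be sketched as a minimal command path from master to slave. The names, the two-dimensional slave state, and the fixed scale factor are illustrative assumptions only; real master–slave systems such as those cited above involve kinematic mapping, filtering, force feedback, and safety limits.

```python
from dataclasses import dataclass

@dataclass
class SlaveState:
    """Toy 2-D pose of the slave manipulator (the actual robot)."""
    x: float = 0.0
    y: float = 0.0

def translate_command(master_delta, scale=0.5):
    """Map a master-arm displacement to a slave-arm displacement.
    A fixed scale factor is a deliberate simplification."""
    dx, dy = master_delta
    return dx * scale, dy * scale

def teleoperation_loop(slave, operator_commands):
    """The operator enters commands on the master; each one is
    translated into slave motion."""
    for cmd in operator_commands:
        dx, dy = translate_command(cmd)
        slave.x += dx
        slave.y += dy
    return slave

slave = teleoperation_loop(SlaveState(), [(2.0, 0.0), (0.0, 4.0)])
# Under the 0.5 scale, the slave ends at (1.0, 2.0).
```

Note what the sketch leaves out: it encodes only the hardware path from command to motion, with no model of the operator's own decision-making, which is precisely the gap the following paragraphs point to.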
Consequently, designers of teleoperation systems concentrate on issues
immediately related to these two components (see, e.g., Refs. 116–119). The implicit
assumption behind this focus on technology is that one component can be fully
trusted: the human operator. As long as the right hardware is there, the operator
is believed to deliver the expected results. It is only when one closely observes the
operation of some such highly sophisticated and accurate systems that one
perceives their low overall efficiency and the awkwardness of interactions between
the operator and the system. One is left with the feeling that while the two
components above are necessary, they are far from being sufficient.