Page 111 - Rapid Learning in Robotics



7.2 Sensor Fusion and 3D Object Pose Identification


Sensor fusion overcomes the limitations of individual sensors for a
particular task. When one kind of sensor cannot provide all the necessary
information, a complementary observation from another sensory subsystem
may fill the gap. Multiple sensors can be combined to improve measurement
accuracy or confidence in recognition (see e.g. Baader 1995; Murphy 1995).
The concept of a sensor system can be generalized to a “virtual sensor” –
an abstract sensor module responsible for extracting certain feature
information from one or several real sensors.
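The virtual-sensor notion described above can be sketched as a small interface. This is an illustrative assumption, not an implementation from the text: the class name, the `extractor` callable, and the convention of returning `None` for an undetectable feature are all hypothetical choices.

```python
from typing import Callable, Optional


class VirtualSensor:
    """Abstract sensor module: extracts one feature value from the raw
    readings of one or several real sensors. Returns None when the
    feature cannot be detected in the current view (e.g. occlusion)."""

    def __init__(self, name: str, extractor: Callable[[dict], Optional[float]]):
        self.name = name
        self.extractor = extractor  # maps raw readings -> feature value or None

    def read(self, raw_readings: dict) -> Optional[float]:
        return self.extractor(raw_readings)


# Example: a virtual sensor that reports an edge angle if the underlying
# camera subsystem delivered one.
edge_sensor = VirtualSensor("edge_angle", lambda raw: raw.get("edge_angle"))
```

Returning `None` rather than raising makes it easy to model the pose-dependent observability discussed next, where only a subset of features may be detected.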
In this section we suggest the application of a PSOM to naturally solve
the sensor fusion problem. For the demonstration, the previous planar
(2D) problem is extended.
Assume a 3D object has a set of salient features which are observed by
one or several sensory systems. Each relevant feature is detected by a
“virtual sensor”. Depending on the object pose relative to the observing
system, the sensor values change, and only a certain subset of features
may be successfully detected.
When employing a PSOM, its associative completion capabilities can
solve a number of tasks:

  - knowing the object pose, predict the sensor response;

  - knowing a subset of sensor values, reconstruct the object pose, and
    complete further information of interest (e.g., in the context of a
    manipulation task, pose-related grasp preshape and approach path
    information);

  - generate hypotheses for further perception “schemata”, i.e. predict
    not-yet-concluded sensor values for “guidance” of further virtual
    sensors.
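The tasks above all reduce to one mechanism: completing a partial embedding vector on the learned (pose, sensor) manifold. As a minimal sketch, assuming a toy one-DOF object and two virtual sensors, the continuous PSOM minimization is approximated here by a dense sampling of the manifold plus a nearest-neighbor lookup over only the known components; all names and the example object are illustrative.

```python
import numpy as np


def sample_manifold(n: int = 360) -> np.ndarray:
    """Toy training set: one pose angle theta, observed by two
    'virtual sensors' (here simply cos and sin of the pose)."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    s1 = np.cos(theta)  # virtual sensor 1
    s2 = np.sin(theta)  # virtual sensor 2
    # Each row is a full embedding vector [pose, sensor1, sensor2].
    return np.column_stack([theta, s1, s2])


def complete(manifold: np.ndarray, partial, known) -> np.ndarray:
    """Associative completion: match only the 'known' components of the
    embedding vector, return the full vector (pose + all sensors).
    A real PSOM minimizes the same partial-distance cost continuously."""
    diff = manifold[:, known] - np.asarray(partial)
    best = np.argmin(np.einsum("ij,ij->i", diff, diff))
    return manifold[best]


M = sample_manifold()

# Task 1: knowing the object pose, predict the sensor responses.
full = complete(M, [np.pi / 2], known=[0])

# Task 2: knowing a subset of sensor values, reconstruct the pose.
pose = complete(M, [0.0, 1.0], known=[1, 2])[0]
```

The same call with a different `known` index set also covers the third task: predicting not-yet-measured sensor values to guide further virtual sensors.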



                 7.2.1 Reconstruct the Object Orientation and Depth

Here we want to extend the previous planar object reconstruction example
to the three-dimensional world, which gives in total 6 degrees of