we should enlarge our view to mappings which produce other mappings
as their result. A similar, embracing view, that of higher-order functions which
take and return other functions, has received increasing attention in the
realm of functional programming languages.
To implement this approach, we used a hierarchical architecture of
mappings, called the “mixture-of-expertise” architecture. While in principle
various network types could be used for these mappings, a practically
feasible solution must be based on a network type that allows the required
basis mappings to be constructed from a rather small number of training
examples. In addition, since we interpolate in weight/parameter
space, similar mappings should give rise to similar weight sets in order to
make the interpolation of expertise meaningful.
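As a concrete illustration of interpolation in weight space, the following Python sketch blends several expert weight sets according to the observed context. It is only a schematic stand-in: the names are hypothetical, and a simple inverse-distance rule takes the place of the meta-level network that performs this role in the architecture described here.

import numpy as np

# Schematic "mixture-of-expertise" weight interpolation (hypothetical
# code, not the PSOM implementation).  Assumption: each basis mapping
# T_i was trained in a prototype context c_i and is fully described by
# its weight vector w_i.  A meta-level mapping blends these weight sets
# according to the currently observed context and thereby outputs the
# weights of a new mapping T.

def interpolate_expertise(context, proto_contexts, proto_weights):
    """Blend expert weight sets by normalized inverse-distance weighting
    of the observed context against the prototype contexts."""
    d = np.linalg.norm(proto_contexts - context, axis=1)
    if np.any(d < 1e-12):            # observed context hits a prototype
        return proto_weights[np.argmin(d)]
    a = 1.0 / d
    a /= a.sum()                     # convex combination coefficients
    return a @ proto_weights         # interpolated weight set

# Three prototype contexts (e.g. camera positions) with their trained
# weight vectors; an intermediate context yields an intermediate mapping.
proto_contexts = np.array([[0.0], [0.5], [1.0]])
proto_weights = np.array([[1.0, 2.0], [1.5, 2.5], [2.0, 3.0]])
w_new = interpolate_expertise(np.array([0.25]), proto_contexts, proto_weights)
print(w_new)   # lies between the first and second expert weight sets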
We illustrated three versions of this approach for the case where the out-
put mapping is a coordinate transformation between the reference frame of
the camera and the object-centered frame. They differed in the choice of
the T-BOX employed. The results showed that on the T-BOX level the learning
PSOM network can fully compete with the dedicated engineering solu-
tion, while additionally offering multi-way mapping capabilities. At the META-BOX
level the PSOM approach is a particularly suitable solution because, first,
it requires only a small number of prototypical training situations and,
second, the context characterization task can profit from the sensor fusion
capabilities of the same PSOM, here called the Meta-PSOM.
We also demonstrated the potential of this approach with the task of learn-
ing 2D and 3D visuo-motor mappings from a single observation after the
underlying sensorimotor transformation has changed, here, e.g., by repo-
sitioning the camera or the pair of individual cameras. After learning from
a single observation, the achieved accuracy compares rather well with that
of the direct learning procedure. As more data becomes available, the T-PSOM
can be fine-tuned to raise its performance to the level of the directly
trained T-PSOM.
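The two-stage procedure, a one-shot initialization followed by optional refinement, can be sketched as follows. The code is a hypothetical illustration in which a plain linear mapping y ≈ W x stands in for the T-PSOM; it is not the procedure used in the experiments.

import numpy as np

# Hypothetical sketch of the refinement stage: W0 is the weight set
# obtained from a single observation of the new context (via the
# meta-level interpolation); further (x, y) samples are then used to
# fine-tune W towards the accuracy of a directly trained mapping.

def fine_tune(W0, samples, lr=0.1, epochs=50):
    W = W0.copy()
    for _ in range(epochs):
        for x, y in samples:
            err = W @ x - y                  # prediction error on new data
            W -= lr * np.outer(err, x)       # gradient step on squared error
    return W

W0 = np.eye(2)                               # one-shot initialization
samples = [(np.array([1.0, 0.0]), np.array([0.9, 0.1])),
           (np.array([0.0, 1.0]), np.array([0.2, 1.1]))]
print(fine_tune(W0, samples))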
The presented arrangement of a basis T-PSOM and two Meta-PSOMs
further demonstrates the possibility of splitting the hierarchical “mixture-of-
expertise” architecture into modules for independently changing parame-
ter sets. As the number of free context parameters grows, this factorization
becomes increasingly crucial for keeping the number of pre-trained
prototype mappings manageable.
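To make the scaling argument concrete (a rough count, not spelled out in the summary itself): with d independently varying sets of context parameters, each sampled at n prototype values, a single Meta-PSOM over the joint context would need on the order of n^d pre-trained prototype mappings, whereas d separate Meta-PSOMs need only about d·n of them; for d = 3 and n = 9 this is 729 versus 27.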
The two hierarchical architectures, the “mixture-of-experts” and the
“mixture-of-expertise” scheme introduced here, complement each other. While