ad hoc to half the mean distance between the base centers. The output values w_i are learned in a supervised manner. The RBF net can be combined with a local linear mapping instead of a constant w_i (Stokbro, Umberger, and Hertz 1990), as described below. RBF network algorithms which generalize the constant-sized radii σ (spheres) to individually adaptable tensors (ellipses) are called “Generalized Radial Basis Function networks” (GRBF), or “Hyper-RBF” (see Powell 1987; Poggio and Girosi 1990).
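
To make this concrete, the following is a minimal sketch of a normalized Gaussian RBF net and of the Stokbro-style variant in which each constant output value w_i is replaced by a local affine map w_i + D_i (x − u_i). The normalized-output form (cf. Eq. 3.6), the common width σ, and all function names and array shapes are illustrative assumptions, not taken from the text.

```python
import numpy as np

def rbf_net(x, centers, sigma, w):
    """Normalized Gaussian RBF net with constant output values w_i."""
    g = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    return g @ w / g.sum()

def rbf_local_linear(x, centers, sigma, w, D):
    """Stokbro-style variant: each unit i contributes an affine map
    w_i + D_i (x - u_i) instead of the constant w_i (assumed form)."""
    g = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    lin = w + np.einsum('noi,ni->no', D, x - centers)  # per-unit affine term
    return g @ lin / g.sum()

# Toy usage: 5 centers in a 2-D input space, 1-D output.
rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, (5, 2))
w = rng.normal(size=(5, 1))          # constant outputs w_i
D = rng.normal(size=(5, 1, 2))       # local linear maps D_i
x = np.array([0.4, 0.6])
print(rbf_net(x, centers, sigma=0.3, w=w))
print(rbf_local_linear(x, centers, sigma=0.3, w=w, D=D))
```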




Figure 3.3: Distance versus topological distance. Four RBF unit center points u_i (denoted 1 to 4) around a test point x (the circles indicate the width σ). Accounting only for the distance |x − u_i|, the RBF output (Eq. 3.6) weights u_1 stronger than u_4. Considering the triangles spanned by the points 123 versus 234 reveals that x is far outside the triangle 123, but in the middle of the triangle 234. Therefore, x can be considered closer to point 4 than to point 1, with respect to their topological relation.
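
The caption's point can be checked numerically. The coordinates below are hypothetical, chosen only to echo the figure's geometry: u_1 is Euclidean-closer to x than u_4, yet x lies inside the triangle spanned by points 2, 3, 4 and outside the triangle 1, 2, 3.

```python
import numpy as np

# Hypothetical coordinates echoing Fig. 3.3 (not taken from the figure).
u = np.array([[-0.3,  0.0],   # u_1: Euclidean-closest off-triangle point
              [ 0.0,  2.0],   # u_2
              [ 0.0, -2.0],   # u_3
              [ 2.0,  0.0]])  # u_4
x = np.array([0.7, 0.0])      # inside triangle 234, outside triangle 123
sigma = 1.0

# Gaussian activations based on Euclidean distance alone (cf. Eq. 3.6):
g = np.exp(-np.sum((u - x) ** 2, axis=1) / (2 * sigma ** 2))
print(g / g.sum())  # u_1 receives a larger weight than u_4
```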





Topological Models and Maps are schemes which build dimension-reducing mappings from a higher-dimensional input space to a low-dimensional set. A very successful model is the so-called “feature map” or “Self-Organizing Map” (SOM) introduced by Kohonen (1984) and described below in Sec. 3.7. In the presented taxonomy the SOM has a special role: it has a localized knowledge representation where the location in the neural layer encodes topological information beyond Euclidean distances in the input space (see also Fig. 3.3). This means that input signals which have similar “features” will map to neighboring neurons in the network (“feature map”). This topology-preserving effect also works in higher dimensions (famous examples are Kohonen's Neural Typewriter for spoken Finnish, and the semantic map, where the similarity relationships of a set of 16 animals are captured by the map layout).
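
The adaptation rule that produces this topology preservation fits in a few lines. The sketch below is a minimal SOM under common assumptions (a 2-D lattice, Gaussian neighborhood, exponentially decaying learning rate and radius); the grid size, decay schedules, and function name are illustrative, not taken from the text.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=30, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: a lattice of reference vectors adapts so that
    neighboring neurons come to respond to similar inputs."""
    rng = np.random.default_rng(seed)
    h, w, d = grid[0], grid[1], data.shape[1]
    weights = rng.uniform(data.min(0), data.max(0), (h, w, d))
    # Lattice coordinates of each neuron: the neighborhood kernel is
    # computed on the lattice, i.e. on topological (not input) distance.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing='ij'), axis=-1)
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (0.01 / lr0) ** (t / n_steps)          # decaying rate
            sigma = sigma0 * (0.5 / sigma0) ** (t / n_steps)  # shrinking radius
            # Best-matching unit: neuron whose reference vector is closest to x.
            bmu = np.unravel_index(
                np.argmin(np.sum((weights - x) ** 2, axis=2)), (h, w))
            # Gaussian neighborhood around the winner on the lattice.
            d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            nh = np.exp(-d2 / (2 * sigma ** 2))
            # Pull the winner and its lattice neighbors toward the input.
            weights += lr * nh[..., None] * (x - weights)
            t += 1
    return weights

# Toy usage: map 2-D points onto a 10x10 lattice.
data = np.random.default_rng(1).uniform(0, 1, (500, 2))
som = train_som(data)
```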