

                 map encoding (i.e. node location in the neural array) are advantageous
                 when the distribution of stochastic transmission errors decreases with
                 the distance to the original data. In case of an error the reconstruction
                 will restore neighboring features, resulting in a more “faithful” compression.
                     Ritter showed the strict monotonic relationship between the stimulus
                 density in the m-dimensional input space and the density of the matching
                 weight vectors. Regions with high input stimulus density P(x) will be
                 represented by more specialized neurons than regions with lower stimulus
                 density. For certain conditions the density of weight vectors could be
                 derived to be proportional to P(x)^α, with the exponent α = m/(m+2)
                 (Ritter 1991).



                 3.8 Improving the Output of the SOM Schema


                 As discussed before, many learning applications desire continuous-valued
                 outputs. How can the SOM network learn smooth input–output mappings?
                     Similar to the binning in the hyper-rectangular recursive partitioning
                 algorithm (CART), the original output learning strategy was the supervised
                 teaching of an attached constant y_a (or vector y_a) for every winning
                 neuron a:

                                          F(x) = y_a                                (3.11)
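                 To make this piecewise-constant scheme concrete, the following sketch
                 (an illustration added here, not part of the original text) assumes a
                 trained map given as an array W of reference vectors w_a and an array Y
                 of attached output constants y_a; these names and the NumPy formulation
                 are assumptions for this example only.

                     import numpy as np

                     def som_constant_output(x, W, Y):
                         """Piecewise-constant SOM output, Eq. (3.11).

                         W : (N, m) array of reference vectors w_a
                         Y : (N, d) array of attached output constants y_a
                         """
                         # winner = neuron whose reference vector is closest to x
                         a = np.argmin(np.linalg.norm(W - x, axis=1))
                         return Y[a]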

                 The next important step to increase the output precision was the
                 introduction of a locally valid mapping around the reference vector.
                 Cleveland (1979) introduced the idea of locally weighted linear regression
                 for univariate approximation and later for multivariate regression
                 (Cleveland and Devlin 1988). Independently, Ritter and Schulten (1986)
                 developed a similar idea in the context of neural networks, which was
                 later termed the Local Linear Map (“LLM”) approach.
                     Within each subregion, the Voronoi cell (depicted in Fig. 3.5), the output
                 is defined by a tangent hyper-plane described by the additional vector (or
                 matrix) B_a:

                                      F(x) = y_a + B_a (x - w_a)                    (3.12)
                 By this means, a univariate function is approximated by a set of tangents.
                 In general, the output F(x) is discontinuous, since the hyper-planes do not
                 match at the Voronoi cell borders.
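                 A minimal sketch of the LLM output computation, Eq. (3.12), is given
                 below; as before, the array names (W, Y, B) and the NumPy formulation are
                 assumptions made for illustration and not taken from the original text.

                     import numpy as np

                     def llm_output(x, W, Y, B):
                         """Local Linear Map output, Eq. (3.12).

                         W : (N, m) reference vectors w_a
                         Y : (N, d) output vectors y_a
                         B : (N, d, m) local slope (Jacobian) estimates B_a
                         """
                         # the winner selects the Voronoi cell containing x
                         a = np.argmin(np.linalg.norm(W - x, axis=1))
                         # tangent hyper-plane through (w_a, y_a) with slope B_a
                         return Y[a] + B[a] @ (x - W[a])

                 Evaluating llm_output on a dense grid of inputs makes the discontinuities
                 at the Voronoi cell borders directly visible, since neighboring tangent
                 planes generally do not meet there.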