to partition an input dataset into K clusters based on the distances between each input instance and K centroids. The algorithm usually converges quickly, is relatively simple to compute, and is effective in many cases. However, the number of clusters K is not known in advance and has to be determined by heuristic methods.
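As a hedged illustration, the following Python sketch shows the standard K-means loop of alternating assignment and centroid-update steps; the function name, dataset shape, and iteration limit are illustrative assumptions and not part of the original text.

    import numpy as np

    def k_means(X, K, n_iters=100, seed=0):
        # Initialize centroids by picking K random input instances.
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=K, replace=False)]
        for _ in range(n_iters):
            # Assignment step: attach each instance to its nearest centroid.
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Update step: move each centroid to the mean of its members.
            new_centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                      else centroids[k] for k in range(K)])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return labels, centroids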


7.2 EXPECTATION-MAXIMIZATION ALGORITHM
Expectation-Maximization (EM) is an algorithm for finding maximum likelihood estimates of parameters in a statistical model [16]. When the model depends on unobserved latent variables, the algorithm iteratively finds a local maximum likelihood solution by alternating two steps: the E-step and the M-step. Its convergence is well established [17], and the K-means clustering algorithm is a special case of the EM algorithm. As with the K-means algorithm, the number of clusters has to be determined before applying this algorithm.
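For concreteness, here is a minimal Python sketch of EM for a one-dimensional Gaussian mixture, assuming the number of components K is given in advance; the initialization strategy and the variable names are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm

    def em_gmm_1d(x, K=2, n_iters=100, seed=0):
        # Heuristic initialization of means, variances, and mixing weights.
        rng = np.random.default_rng(seed)
        mu = rng.choice(x, size=K, replace=False)
        var = np.full(K, x.var())
        pi = np.full(K, 1.0 / K)
        for _ in range(n_iters):
            # E-step: responsibility of each component for each point.
            r = pi * norm.pdf(x[:, None], loc=mu, scale=np.sqrt(var))
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from the weighted data.
            Nk = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / Nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
            pi = Nk / len(x)
        return mu, var, pi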


7.3 DENSITY-BASED SPATIAL CLUSTERING OF APPLICATIONS WITH NOISE ALGORITHM
Density-based spatial clustering of applications with noise (DBSCAN) is one of the best-known density-based clustering algorithms [18]. It repeats the process of grouping close points together until no point is left to group. After grouping, points that do not belong to any group are treated as outliers and labeled as noise. In spite of the popularity and effectiveness of this algorithm, its performance depends significantly on two threshold parameters, the neighborhood radius and the minimum number of points required to form a group, which determine the grouping.
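A hedged example using the scikit-learn implementation illustrates the role of the two thresholds; the dataset and the particular values of eps and min_samples are assumptions chosen only for illustration.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Illustrative data: two dense blobs plus a few scattered points.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
                   rng.normal(5.0, 0.3, size=(50, 2)),
                   rng.uniform(-2.0, 7.0, size=(5, 2))])

    # eps is the neighborhood radius and min_samples the density threshold;
    # both must be chosen by the user and strongly affect the grouping.
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print("clusters found:", n_clusters)
    print("points labeled as noise:", int(np.sum(labels == -1)))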

7.4 COMPARISON BETWEEN CLUSTERING ALGORITHMS

We have tested several clustering methods with artificial datasets such as the multivariate Gaussian random dataset and some of the datasets from the UCI Machine Learning Repository [19]. The overall performance of clustering with the Hebbian-LMS algorithm is comparable to the results obtained with the existing algorithms. These existing algorithms require us to determine model parameters manually or to use heuristic methods. Hebbian-LMS only requires us to choose a value of the parameter μ, the learning step. In most cases this choice is not critical and can be made in the same way as choosing μ for supervised LMS, as described in detail in Ref. [7].
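To show where the learning step enters, a single Hebbian-LMS weight update might look like the following sketch; the particular error form sgm(SUM) − γ·(SUM) and the values of μ and γ are assumptions based on the bootstrap formulation of Ref. [7], and the exact expressions should be taken from that reference.

    import numpy as np

    def hebbian_lms_step(w, x, mu=0.01, gamma=0.5):
        # One unsupervised Hebbian-LMS update for a single input vector x.
        # The error is assumed here to be sgm(SUM) - gamma*(SUM); see Ref. [7].
        s = np.dot(w, x)                    # (SUM), the weighted sum
        error = np.tanh(s) - gamma * s      # assumed error signal, a function of (SUM)
        return w + 2.0 * mu * error * x     # LMS-style update with learning step mu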



8. A GENERAL HEBBIAN-LMS ALGORITHM

The Hebbian-LMS algorithm applied to the neuron and synapses of Fig. 1.9 results in a nicely working clustering algorithm, as demonstrated above, but its error signal, a function of (SUM), may not correspond exactly to nature's error signal. How nature generates the error signal will be discussed below.