Artificial Intelligence in the Age of Neural Networks and Brain Computing





The network had 150 neurons per hidden layer, and the input patterns had 50 components. The training patterns were drawn from 100 clusters, each cluster consisting of 20 vectors randomly distributed about a centroid, with the centroids themselves randomly distributed in the 50-dimensional input space. The parameter r is defined as the ratio of the standard deviation of the samples about a centroid to the average distance between centroids. The larger the value of r, the more the clusters overlap and the more difficult the problem. This is evident from the plots of Fig. 1.19.
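The training-set construction described above can be sketched as follows. This is a minimal NumPy sketch, not the authors' code: the uniform range for the centroids, the Gaussian spread about each centroid, and the exact reading of the ratio r (per-component standard deviation over mean inter-centroid distance) are assumptions, since the chapter does not specify them.

```python
import numpy as np

def make_clustered_patterns(n_clusters=100, vectors_per_cluster=20,
                            dim=50, sigma=0.1, seed=0):
    """Generate clustered training patterns: random centroids in a
    dim-dimensional space, with Gaussian samples about each centroid.
    The ranges and distributions here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Centroids randomly distributed in the input space (assumed uniform).
    centroids = rng.uniform(-1.0, 1.0, size=(n_clusters, dim))
    # 20 vectors per cluster, spread about each centroid (assumed Gaussian).
    samples = centroids[:, None, :] + sigma * rng.standard_normal(
        (n_clusters, vectors_per_cluster, dim))
    # One plausible reading of r: per-component standard deviation of the
    # samples about a centroid, divided by the average distance between
    # centroids. Larger r means more cluster overlap, a harder problem.
    diffs = centroids[:, None, :] - centroids[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    mean_centroid_dist = dists[np.triu_indices(n_clusters, k=1)].mean()
    r = sigma / mean_centroid_dist
    return samples.reshape(-1, dim), r
```

Sweeping `sigma` upward while holding the centroid layout fixed raises r and reproduces the easy-to-hard progression shown in Fig. 1.19.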



ACKNOWLEDGMENTS
We would like to acknowledge the help that we have received from Neil Gallagher, Naren Krishna, and Adrian Alabi. This chapter is based on B. Widrow, Y. Kim, and D. Park, "The Hebbian-LMS Learning Algorithm," in IEEE Computational Intelligence Magazine, vol. 10, no. 4, pp. 37-53, Nov. 2015.




REFERENCES
 [1] D.O. Hebb, The Organization of Behavior, Wiley & Sons, 1949.
 [2] G.-Q. Bi, M.-M. Poo, Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type, Journal of Neuroscience 18 (24) (1998) 10464-10472.
 [3] G.-Q. Bi, M.-M. Poo, Synaptic modifications by correlated activity: Hebb's postulate revisited, Annual Review of Neuroscience 24 (2001) 139-166.
 [4] S. Song, K.D. Miller, L.F. Abbott, Competitive Hebbian learning through spike-timing-dependent synaptic plasticity, Nature Neuroscience 3 (9) (2000) 919-925.
 [5] B. Widrow, Y. Kim, D. Park, The Hebbian-LMS learning algorithm, IEEE Computational Intelligence Magazine 10 (4) (2015) 37-53.
 [6] B. Widrow, M.E. Hoff Jr., Adaptive switching circuits, in: IRE WESCON Convention Record, 1960, pp. 96-104.
 [7] B. Widrow, S.D. Stearns, Adaptive Signal Processing, Prentice-Hall, 1985.
 [8] B. Widrow, Bootstrap learning in threshold logic systems, in: International Federation of Automatic Control, 1966, pp. 96-104.
 [9] W.C. Miller, A Modified Mean Square Error Criterion for Use in Unsupervised Learning (Ph.D. thesis), Stanford University, 1967.
[10] R.W. Lucky, Automatic equalization for digital communication, Bell System Technical Journal 44 (4) (1965) 547-588.
[11] R.W. Lucky, Techniques for adaptive equalization for digital communication, Bell System Technical Journal 45 (2) (1966) 255-286.
[12] B. Widrow, A. Greenblatt, Y. Kim, D. Park, The no-prop algorithm: a new learning algorithm for multilayer neural networks, Neural Networks 37 (2012) 182-188.
[13] B. Widrow, J.C. Aragon, Cognitive memory, Neural Networks 41 (2013) 3-14.
[14] J.A. Hartigan, M.A. Wong, Algorithm AS 136: a K-means clustering algorithm, Journal of the Royal Statistical Society Series C (Applied Statistics) 28 (1) (1979) 100-108.