Learning in the SNN is performed in two stages:
•  Unsupervised training, where spatiotemporal data is entered into relevant areas of the SNNcube over time. Unsupervised learning is performed to modify the initially set connection weights. The SNNcube will learn to activate the same groups of spiking neurons when similar input stimuli are presented, also known as a polychronization effect [96].
•  Evolving supervised training of the spiking neurons in the output classification module, where the same data that was used for unsupervised training is propagated again through the trained SNN, and the output neurons are trained to classify the spatiotemporal spiking pattern of the SNNcube into predefined classes (or output spike sequences). As a special case, all neurons from the SNNcube are connected to every output neuron. Feedback connections from output neurons to neurons in the SNN can be created for reinforcement learning. Different SNN methods can be used to learn and classify spiking patterns from the SNNcube, including the deSNN [99] and SPAN [116] models. The latter is suitable for generating motor control spike trains in response to certain patterns of activity of the SNNcube.
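To make the unsupervised stage concrete, the following minimal Python sketch implements a pair-based STDP weight update with exponential windows, the kind of rule commonly used to adapt reservoir connection weights. The amplitudes and time constants are illustrative assumptions, not NeuCube's actual parameters.

import numpy as np

# Pair-based STDP with exponential windows. All parameter values are
# illustrative assumptions for this sketch, not NeuCube's settings.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(t_pre, t_post):
    # Weight change for one pre/post spike pair: pre before post
    # (dt > 0) gives potentiation (LTP); post before pre gives
    # depression (LTD).
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)

print(stdp_dw(10.0, 15.0))  # pre leads post: positive change (LTP)
print(stdp_dw(15.0, 10.0))  # post leads pre: negative change (LTD)

Under such a rule, repeated presentation of similar input patterns strengthens the same connection pathways, which is what allows the SNNcube to respond with the same groups of neurons to similar stimuli.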
Memory in the NeuCube architecture is represented as a combination of three mutually interacting types of memory:

•  Short-term memory, represented as changes of the PSP and temporary changes of synaptic efficacy;
•  Long-term memory, represented as a stable establishment of synaptic efficacy: long-term potentiation (LTP) and long-term depression (LTD);
•  Genetic memory, represented as a genetic code.

In NeuCube, similar activation patterns (called "polychronous waves") can be generated in the SNNcube with recurrent connections to represent short-term memory. When using STDP learning, connection weights change to form LTP or LTD, which constitute long-term memory.
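The division between these memory types can be seen in a minimal leaky integrate-and-fire (LIF) update, a neuron model widely used in SNN implementations: the decaying membrane potential (PSP) acts as short-term memory of recent spikes, while the synaptic weight, if adapted by STDP, carries long-term memory. The time constant, threshold, and weight below are illustrative assumptions.

def simulate_lif(input_spikes, w=0.5, steps=50):
    # Minimal LIF neuron sketch: the leaking potential v is the
    # short-term memory; the weight w would hold long-term memory.
    TAU_M, V_THRESH, DT = 10.0, 1.0, 1.0  # illustrative values
    v, out = 0.0, []
    for t in range(steps):
        v -= DT * v / TAU_M    # leak: the PSP decays over time
        if t in input_spikes:
            v += w             # each input spike adds a PSP
        if v >= V_THRESH:      # threshold crossing: output spike
            out.append(t)
            v = 0.0            # reset after firing
    return out

print(simulate_lif({5, 7, 9}))    # close spikes summate: neuron fires
print(simulate_lif({5, 25, 45}))  # PSPs decay between spikes: silent

Closely spaced inputs summate before the PSP decays away, so the neuron fires; widely spaced inputs leave no lasting trace unless the weights themselves change.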
Results from applying NeuCube suggest that the architecture can be explored for learning long (spatio)temporal patterns and used as an associative memory. Once data is learned, the SNNcube retains the connections as long-term memory. Because the SNNcube learns functional pathways of spiking activity as structural pathways of connections, entering only a small initial part of the input data causes the SNNcube to "synfire" and "chain-fire" along the learned connection pathways, reproducing the learned functional pathways. Thus, a NeuCube can be used as an associative memory and as a predictive system with a wide scope of applications.
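The "chain-firing" of learned pathways can be illustrated with a deliberately simplified toy model, not NeuCube's actual implementation: neurons connected in a chain by strengthened (post-LTP) weights replay the whole pathway when only the first neuron is stimulated.

# Toy chain-fire replay. Neurons 0..4 form a learned pathway i -> i+1
# with strong post-LTP weights; an illustrative simplification only.
N, THRESHOLD, STRONG_W = 5, 0.8, 1.0
successors = {i: i + 1 for i in range(N - 1)}  # learned pathway

def replay(initial_spikes):
    fired, frontier, t = [], set(initial_spikes), 0
    while frontier:
        fired.extend((t, n) for n in sorted(frontier))
        # each firing neuron drives its successor above threshold
        frontier = {successors[n] for n in frontier
                    if n in successors and STRONG_W >= THRESHOLD}
        t += 1
    return fired

print(replay({0}))  # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]

Entering only the initial fragment of a learned pattern reproduces the full spatiotemporal firing sequence, which is the basis for using the trained SNNcube as an associative and predictive memory.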
Using the NeuCube computational platform, application systems can be developed for learning, classification, regression, or data analysis of temporal or spatio/spectrotemporal data. The following steps need to be followed as a design and implementation process:
1. Input data transformation into spike sequences (one common encoding is sketched below);
2. Mapping input variables into spiking neurons;
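As an illustration of step 1, the following sketch encodes a continuous signal into spike trains by thresholding its consecutive differences (a threshold-based representation); the threshold value is an illustrative assumption.

import numpy as np

def threshold_encode(signal, threshold=0.1):
    # Emit +1 when the signal rises by more than `threshold` between
    # consecutive samples, -1 when it falls by more, and 0 otherwise.
    diffs = np.diff(np.asarray(signal, dtype=float))
    spikes = np.zeros_like(diffs, dtype=int)
    spikes[diffs > threshold] = 1    # spike on a significant rise
    spikes[diffs < -threshold] = -1  # spike on a significant fall
    return spikes

x = [0.0, 0.3, 0.6, 0.6, 0.4, 0.1]
print(threshold_encode(x))  # [ 1  1  0 -1 -1]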