Page 135 - Artificial Intelligence in the Age of Neural Networks and Brain Computing

124    CHAPTER 6 Evolving and Spiking Connectionist Systems




eSNN have several parameters that need to be optimized for optimal performance. Several successful methods have been proposed for this purpose, among them the quantum-inspired evolutionary algorithm (QiEA) [31] and the quantum-inspired particle swarm optimization method (QiPSO) [32].
   Quantum-inspired optimization methods use the principle of superposition of states to represent and optimize the features (input variables) and parameters of the eSNN [37]. Features and parameters are represented as qubits, each in a superposition of 1 (selected) with probability α and 0 (not selected) with probability β. When the model has to be calculated, the quantum bits "collapse" into a value of 1 or 0.
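The collapse-and-update loop of such quantum-inspired feature selection can be sketched as follows. This is an illustrative toy implementation, not the published QiEA or QiPSO code; the fitness function, rotation step size, and population size are assumptions made for the example.

```python
import random

def collapse(probs):
    """Collapse each qubit into 1 (selected) or 0 (not selected)."""
    return [1 if random.random() < p else 0 for p in probs]

def update(probs, best, step=0.05):
    """Shift each qubit's probability toward the best bit string found so far
    (a simplified stand-in for the quantum rotation gate of QiEA)."""
    return [min(1.0, p + step) if b == 1 else max(0.0, p - step)
            for p, b in zip(probs, best)]

def qiea_feature_selection(fitness, n_features, generations=50, pop=10):
    probs = [0.5] * n_features          # start in equal superposition
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        for _ in range(pop):
            bits = collapse(probs)      # "observe" a candidate feature subset
            f = fitness(bits)
            if f > best_fit:
                best, best_fit = bits, f
        probs = update(probs, best)     # bias the superposition toward the best
    return best, best_fit
```

In a real eSNN setting, the fitness function would train and evaluate an eSNN on the feature subset encoded by the collapsed bit string, and qubits would also encode model parameters.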


3.2 APPLICATIONS AND IMPLEMENTATIONS OF SNN FOR AI
Numerous applications based on different SNN, and more specifically on eSNN, have been developed, for example:

                         •  Advanced spiking neural network technologies for neurorehabilitation [102];
                         •  Object movement recognition [103];
                         •  Multimodal audio and visual information processing [104];
                         •  Ecological data modeling and prediction of the establishment of invasive species
                            [105];
                         •  Integrated brain data analysis [106];
                         •  Predictive modeling method and case study on personalized stroke occurrence
                            prediction [107].
   The full advantage of SNN in terms of speed and low computational cost can be achieved when SNN are implemented on neuromorphic hardware platforms. Contrary to the traditional von Neumann architecture, where memory, control, and the ALU are separate modules, in neuromorphic systems all these modules are integrated together, as they are in the brain.
   To make the implementation of SNN models more efficient, specialized neuromorphic hardware has been developed, including:

                         •  A hardware model of an integrate-and-fire neuron [108];
                         •  A silicon retina [109];
•  INI Zürich SNN chips [110,111];
•  IBM TrueNorth [112]. The system enables parallel processing of 1 million
   spiking neurons and 1 billion synapses;
                         •  DVS and silicon cochlea (ETH, Zurich) [113];
                         •  Stanford NeuroGrid [114]. The system has 1 million neurons on a board,
                            63 billion connections, and is realized as hybrid analog/digital circuits;
                         •  SpiNNaker [115]. The system is a general-purpose, scalable, multichip multicore
                            platform for real-time massively parallel simulations of large-scale SNN.
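As an illustration of what these chips realize in silicon, a minimal software sketch of a leaky integrate-and-fire neuron (the model cited in [108]) might look as follows. The parameter values and the simple Euler discretization are illustrative assumptions, not taken from any of the listed platforms.

```python
def lif_simulate(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron driven by a list of input
    currents; return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # while being charged by the input current.
        v += (dt / tau) * (-(v - v_rest) + i_in)
        if v >= v_thresh:        # threshold crossing: emit a spike and reset
            spikes.append(t)
            v = v_rest
    return spikes
```

A constant suprathreshold input produces a regular spike train, while a subthreshold input (whose steady-state potential stays below the threshold) produces none; neuromorphic chips implement many such units in parallel with analog or digital circuits.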
                            The neuromorphic platforms are characterized by massive parallelism, high
                         speed, and low power consumption. For their efficient application, they require
                         the development of SNN computational models for learning from data.