Page 87 - Artificial Intelligence in the Age of Neural Networks and Brain Computing

4. Conclusion

FIGURE 3.8  Causality: local minimum and initial condition on a nonconvex energy landscape; the horizontal abscissas could be the input sensor vectors.

hippocampus for the associative memory storage after the image postprocessing.
   Analyticity: We define analyticity to be represented by a unique energy/cost function for the fuzzy attributes, in terms of their membership functions.
   Causality: We define causality to be the one-to-one relationship between the initial value and the solution reached by gradient descent.
   Experience: We define experience analytically, as a given nonconvex energy-function landscape (Fig. 3.8).
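The causality definition above can be illustrated numerically: on a nonconvex landscape, deterministic gradient descent maps each initial condition to exactly one local minimum. The following is a minimal sketch, not from the chapter; the double-well energy `H`, the helper `grad_descent`, and all parameter values are illustrative assumptions.

```python
def grad_descent(x0, grad, lr=0.01, steps=2000):
    """Plain gradient descent; the trajectory is deterministic,
    so each initial condition maps to exactly one minimum."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Nonconvex double-well energy H(x) = (x^2 - 1)^2, minima at x = -1 and x = +1.
H = lambda x: (x**2 - 1)**2
dH = lambda x: 4 * x * (x**2 - 1)

# Different initial conditions land in different local minima:
left = grad_descent(-0.5, dH)    # descends into the minimum near x = -1
right = grad_descent(0.5, dH)    # descends into the minimum near x = +1
```

Because the update rule is deterministic, the map from initial value to answer is one-to-one in the sense used here: rerunning from the same starting point always recovers the same minimum.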
Derivation of the Newtonian equation of motion of the BNN from the Lyapunov monotonic convergence is as follows. Since we know \(\Delta H_{\mathrm{brain}} \le 0\),

\[
\frac{\Delta H_{\mathrm{brain}}}{\Delta t}
= \frac{\Delta H_{\mathrm{brain}}}{\Delta W_{i,j}}\,\frac{\Delta W_{i,j}}{\Delta t}
= -\frac{\Delta W_{i,j}}{\Delta t}\,\frac{\Delta W_{i,j}}{\Delta t}
= -\left(\frac{\Delta W_{i,j}}{\Delta t}\right)^{2} \le 0. \quad \text{Q.E.D.}
\tag{3.20}
\]
Therefore, the Newtonian equation of motion for the learning of the synaptic weight matrix follows from the brain equilibrium at minimum free energy (MFE) in the isothermal Helmholtz sense:

\[
\text{Newton:}\quad
\frac{\Delta W_{i,j}}{\Delta t} = -\frac{\Delta H_{\mathrm{brain}}}{\Delta W_{i,j}}
\tag{3.21}
\]
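The Lyapunov property of Eq. (3.20) can be checked numerically: updating the weights by the Newton rule of Eq. (3.21) should never increase the energy. Below is a minimal NumPy sketch; the quadratic stand-in energy `H`, the data `S` and `T`, and the learning rate are illustrative assumptions, not the chapter's model.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(5, 3))      # assumed input firing rates
T = rng.normal(size=(5, 2))      # assumed target outputs
W = rng.normal(size=(3, 2))      # synaptic weight matrix W_ij

def H(W):
    """A stand-in differentiable 'brain' energy; any smooth H works here."""
    return 0.5 * np.sum((S @ W - T) ** 2)

def dH_dW(W):
    """Gradient of H with respect to the weight matrix."""
    return S.T @ (S @ W - T)

lr = 0.01
energies = [H(W)]
for _ in range(200):
    W = W - lr * dH_dW(W)        # discrete Eq. (3.21): dW/dt = -dH/dW
    energies.append(H(W))

# Eq. (3.20): Delta H <= 0 at every step, i.e. monotonic convergence.
assert all(e1 <= e0 + 1e-12 for e0, e1 in zip(energies, energies[1:]))
```

The assertion at the end is exactly the discrete form of Eq. (3.20): each step changes the energy by approximately \(-(\Delta W/\Delta t)^2 \,\Delta t \le 0\).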
It takes two to dance the tango: unsupervised learning becomes possible because the BNN has both neurons, serving as threshold logic, and housekeeping glial cells, serving as input and output.
We assume, for the sake of causality, that the layers are hidden from direct outside input, except for the first layer, and that the l-th layer can flow forward to the (l+1)-th layer, or backward to the (l-1)-th layer, etc.
We define the Dendrite Sum over all the firing rates \(\vec{S}'_i\) of the lower input layer, represented by the output degree-of-uniformity entropy, as the following net Dendrite vector:

\[
\overrightarrow{\text{Dendrite}}_j \equiv \sum_i W_{i,j}\,\vec{S}'_i
\tag{3.22}
\]
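Eq. (3.22) is a weighted vector sum, which in matrix form is simply a transposed matrix product. The following is a small NumPy sketch; the numerical values and array names are illustrative assumptions.

```python
import numpy as np

# Firing-rate vectors S'_i of the lower layer, one row per input neuron i
# (shapes and values are illustrative, not from the chapter).
S_prime = np.array([[0.2, 0.8],
                    [0.5, 0.5],
                    [0.9, 0.1]])   # 3 input neurons, 2-component vectors

W = np.array([[0.1, 0.4],
              [0.3, 0.2],
              [0.6, 0.5]])         # W[i, j]: weight from input i to dendrite j

# Eq. (3.22): Dendrite_j = sum_i W_ij * S'_i, for all j at once.
dendrite = W.T @ S_prime
# dendrite[j] is the net dendrite vector arriving at output neuron j.
```

Each row `dendrite[j]` is the weighted sum of the lower-layer firing-rate vectors, so a full layer's dendrite sums reduce to one matrix multiplication.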