
using the vector v (which has a 1 in the index of each active y field neuron and zeros everywhere
else). Then calculate the input intensity vector W^T v for the x field (this is the "reverse
transmission" phase of the operation of the network) and again make active those neurons with the
largest or near-largest values of input intensity. This completes one cycle of operation of the
network. Astoundingly, the state of the x field of the network will be very close to x_k, the
vector used as the dominant base for the construction of u (as long as the number of modifications
made to x_k when forming u was not too large).
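As a concrete illustration, here is a minimal Python sketch of one such cycle. It assumes a binary connection matrix W with rows indexed by y field neurons and columns by x field neurons (so that the forward intensities are W u and the reverse intensities are W^T v, as in the text), and it approximates the "largest or near-largest wins" rule with a simple fractional threshold; the `slack` parameter is an illustrative stand-in, not something specified in the text:

```python
import numpy as np

def winners(intensity, slack=0.9):
    """Make active the neurons whose input intensity is at or near the
    maximum.  `slack` (a fraction of the maximum) is an illustrative
    stand-in for the text's "largest or near-largest" rule."""
    return (intensity >= slack * intensity.max()).astype(float)

def one_cycle(W, u, slack=0.9):
    """One forward/reverse cycle of the attractor network.

    W : (n_y, n_x) binary connection matrix (y field rows, x field columns)
    u : binary activation vector over the x field
    Returns the new binary x field state.
    """
    v = winners(W @ u, slack)       # forward transmission: y intensities W u
    return winners(W.T @ v, slack)  # reverse transmission: x intensities W^T v
```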
  Now expand your experiments by letting each u be equal to one of the x field stable states
x_k with many (say, half) of its neurons made inactive, plus the union of many (say, 1 to 10) small
fragments (say, 3 to 8 neurons each) of other stable x field vectors, along with a small number
(say, 5 to 10) of active "noise" (randomly selected) neurons (see Figure 3.A.4). Now, when
operated, the network will converge rapidly (again, often in one cycle) to the x_k symbol whose
fragment was the largest. When you do your experiments, you will see that this works even if that
largest fragment contains only a third of the neurons in the original x_k. If u contains multiple
stable x field vector fragments of roughly the same maximum size, the final state is the union of
the complete x field vectors (this is an important aspect of confabulation not mentioned in
Hecht-Nielsen, 2005). As we will see below, this network behavior is essentially all we need for
carrying out confabulation.
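The experiment can be sketched with the `one_cycle` function above. The excerpt does not specify how W is built; the sketch below assumes paired sparse binary stable states stored by clipped Hebbian outer products, which is one common way to construct such a matrix. All pattern sizes and fragment counts are illustrative:

```python
rng = np.random.default_rng(0)
n_x, n_y, K, active = 200, 200, 10, 20

# K sparse binary stored states for each field (illustrative sizes).
X = np.zeros((K, n_x))
Y = np.zeros((K, n_y))
for k in range(K):
    X[k, rng.choice(n_x, active, replace=False)] = 1
    Y[k, rng.choice(n_y, active, replace=False)] = 1

# Assumed storage rule: clipped Hebbian outer products of paired states.
W = np.clip(Y.T @ X, 0, 1)                   # shape (n_y, n_x)

# Build u: half of x_0, small fragments of four other stable states,
# plus a few randomly selected "noise" neurons.
u = np.zeros(n_x)
u[rng.choice(np.flatnonzero(X[0]), active // 2, replace=False)] = 1
for k in range(1, 5):
    u[rng.choice(np.flatnonzero(X[k]), 4, replace=False)] = 1
u[rng.choice(n_x, 7, replace=False)] = 1     # noise neurons

x_new = one_cycle(W, u)
print("overlap with x_0:", (x_new * X[0]).sum() / X[0].sum())
```

With parameters in this range, the dominant x_0 fragment drives the matching y field neurons hardest, and one cycle typically recovers nearly all of x_0, mirroring the behavior described in the text.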
  Again, notice that to achieve the "neurons with the largest or near-largest input excitation
win" information processing effect, all that is needed is an excitatory operation control input
to the network which uniformly raises all of the involved neurons' excitation levels (towards a
constant fixed "firing" threshold that each neuron uses) at the same time. By ramping up this
input, eventually a group of neurons will "fire"; and these will be exactly those with the largest
or near-largest input intensities.
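A toy rendering of this ramping mechanism follows. The text specifies only that a uniform control input rises toward a fixed firing threshold shared by all neurons; the particular threshold placement and step size here are illustrative assumptions:

```python
def ramped_winners(intensity, step=2.0):
    """Select winners by ramping a uniform excitatory control input.

    Every neuron shares the same fixed firing threshold `theta`
    (set here, for illustration, just above the largest raw intensity).
    A common excitation `c` is added to every neuron's input intensity
    and raised step by step; the first group of neurons to reach the
    threshold fires, and that group consists of exactly the neurons
    with the largest or near-largest intensities (a coarser `step`
    admits more "near-largest" neurons into the winning group).
    """
    theta = intensity.max() + 1.0
    c = 0.0
    fired = intensity + c >= theta
    while not fired.any():
        c += step                   # ramp up the operation control input
        fired = intensity + c >= theta
    return fired.astype(float)
```

The point of the sketch is that this selection requires no global comparison circuitry: each neuron applies only its own fixed threshold to its own input plus the shared control signal, yet the ensemble behaves as a "largest or near-largest wins" competition.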
Figure 3.A.4  Feature attractor function of the simple attractor network example. The initial state (top portion) of
the x neural field is a vector u consisting of a large portion (say, half of its neurons) of one particular x_k (the
neurons of this x_k are shown in green), along with small subsets of neurons of many other x field stable states.
The network is then operated in the x to y direction (top diagram). Each neuron of u sends output to those neurons
of the y field to which it is connected (as determined by the connection matrix W). The y field neurons which
receive the most, or close to the most, connections from active neurons of u are then made active. These active
neurons are represented by the vector v. The network is then operated in the y to x direction (bottom diagram),
where the x field neurons receiving the most, or close to the most, connections from active neurons of v are made
active. The astounding thing is that this set of active x field neurons is typically very close to x_k, the dominant
component of the initial u input. Yet, all of the processing is completely local and parallel. As will be seen below,
this is all that is needed to carry out confabulation. In thalamocortical modules this entire cycle of operation
(which is controlled by a rising operation command input supplied to all of the involved neurons of the module) is
probably often completed in roughly 100 msec. The hypothesis of the theory is that this feature attractor behavior
implements confabulation: the universal information processing operation of cognition.