4.1 TOWARD A CULTURAL REVOLUTION IN HARD NEUROSCIENCE
There have been hopes that politically correct clock-free models of learning based
only on bottom-up research in neuroscience could lead to chip designs which somehow
would be able to learn to predict, classify, and solve complex decision problems
better than the new mathematics being developed for ANNs. Many millions of
dollars have been spent following that philosophy, with questionable success at best.
It was very interesting to see how and why HP pulled out of the great DARPA
SyNAPSE program [33]. That program did achieve some great things, but the outcome
reinforced the question: if bottom-up modeling of how the brain learns to predict
and to act has not really worked, why not try a top-down approach, moving from
the actual learning powers we see in the brain down to the circuits in the brain which
may implement them somehow? Since we now know that the calculation of derivatives
(exact or modulated) is crucial to AI, why not look to see how and where
these calculations might be implemented in the brain?
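To make "calculation of derivatives" concrete, here is a minimal sketch, in Python with NumPy, of reverse-mode (backpropagation-style) computation of exact derivatives through a tiny two-layer network. The layer sizes, the tanh nonlinearity, and the squared-error loss are illustrative assumptions, not details taken from this chapter or from any specific brain model.

```python
# Minimal sketch: exact derivatives of a loss with respect to all weights,
# obtained by one reverse sweep of the chain rule (the core of backpropagation).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)           # input vector (sizes are illustrative)
y = np.array([1.0])              # target
W1 = rng.normal(size=(4, 3))     # first-layer weights
W2 = rng.normal(size=(1, 4))     # second-layer weights

# Forward pass
h_pre = W1 @ x                   # hidden-layer pre-activations
h = np.tanh(h_pre)               # hidden activations
y_hat = W2 @ h                   # network output
loss = 0.5 * np.sum((y_hat - y) ** 2)

# Reverse pass: propagate derivatives backward through the same structure
d_yhat = y_hat - y               # dL/dy_hat
dW2 = np.outer(d_yhat, h)        # dL/dW2
d_h = W2.T @ d_yhat              # dL/dh
d_hpre = d_h * (1.0 - h ** 2)    # dL/dh_pre (derivative of tanh)
dW1 = np.outer(d_hpre, x)        # dL/dW1

print("loss:", loss)
print("dL/dW1 shape:", dW1.shape, " dL/dW2 shape:", dW2.shape)
```

The point of the sketch is simply that one backward sweep yields exact derivatives for every weight at a cost comparable to the forward pass, which is why the question of where such calculations might live in brain circuitry is worth asking.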
Back in 1993, when Simon Haykin arranged for Wiley to reprint my Ph.D.
thesis in its entirety [35] (reformatted to be more readable) along with a few
new papers to put it into context, the great neuropsychologist Karl Pribram wrote
an endorsement which appears on the back: “What a delight it was to see Paul
Werbos rediscover Freud’s version of ‘backpropagation.’ Freud was adamant (in
The Project for a Scientific Psychology) that selective learning could only take
place if the presynaptic neuron was as influenced as the postsynaptic neuron during
excitation. Such activation of both sides of the contact barrier (Freud’s name for
the synapse) was accomplished by reducing synaptic resistance by the absorption
of ‘energy’ at the synaptic membranes. Not bad for 1895! But Werbos 1993 is
even better.”
Freud’s work itself is incredibly complicated and variegated. Backpropagation
was developed as a kind of mathematical explanation and embodiment of the
specific ideas discussed very clearly, and attacked, by a pair of modernist social
scientists who criticized psychoanalysis in general [36]. Pribram himself probed much
deeper into Freud’s efforts to understand brain learning and intelligence in general
[37], but the details are beyond the scope of this simple chapter.
How does the brain actually implement a universal ability to learn to predict?
Here is a figure from my theory [32] of how it works:
If you compare Figs. 8.9–8.14, you will see that the big picture on the right here
matches easily, but the cutout picture on the left is more complicated than anything
in Fig. 8.9. That is because my theory assumes a more advanced kind of
autoencoder network, taken from my work on mouse-level computational intelligence
(Section 3.3). The cutout on the left is an actual photograph of the famous “six
layers” of the cerebral cortex of higher mammals, which has the same basic cutaway
structure in humans, mice, and rats.
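For readers who have not seen one, here is a bare-bones sketch of an ordinary autoencoder trained by gradient descent, again in Python with NumPy. It is only a reminder of the basic idea of learning a compressed code that reconstructs the input; it is not the more advanced architecture of Section 3.3 or of [32], and the data, layer sizes, and learning rate are arbitrary illustrative choices.

```python
# Minimal sketch of a plain autoencoder: encode 8 features into a 3-dimensional
# code and decode back, trained to minimize squared reconstruction error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))            # illustrative data: 100 samples, 8 features
W_enc = rng.normal(size=(3, 8)) * 0.1    # encoder weights: 8 -> 3
W_dec = rng.normal(size=(8, 3)) * 0.1    # decoder weights: 3 -> 8
lr = 0.01

for step in range(500):
    H = np.tanh(X @ W_enc.T)             # compressed code for each sample
    X_hat = H @ W_dec.T                  # reconstruction of the input
    err = X_hat - X                      # reconstruction error
    # Backpropagated gradients of the mean squared reconstruction error
    dW_dec = err.T @ H / len(X)
    dH = (err @ W_dec) * (1.0 - H ** 2)  # through the tanh nonlinearity
    dW_enc = dH.T @ X / len(X)
    W_dec -= lr * dW_dec
    W_enc -= lr * dW_enc

print("final reconstruction MSE:", float(np.mean(err ** 2)))
```

The design choice being illustrated is simply that the same derivative calculations sketched earlier drive the learning of a compact internal representation, which is the role the autoencoder-like structure plays in the theory discussed here.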
If you look closely at Figs. 8.6 and 8.9, you will see references to a number of the
important leading researchers whose work led me to this theory. I apologize that I
cannot say more here about that work, for reasons of length, but the papers I do