and in later articles extended the idea to both visual perception and motor control.
The method of temporal differences [10,11] was later applied to similar phenomena.
The temporal difference method is closely related to networks that incorporate
error correction. Motor control, for example, involves comparing the current
position of muscles with a target position (e.g., Ref. [44]). Error correction has
been applied to cognitive information processing in back propagation networks
[4]. In fact, [20] initially developed what came to be known as back propagation
in order to control the parameters in time-series models.
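As a rough illustration of these two ideas, the Python sketch below shows a single temporal-difference (TD(0)) update of a value estimate and a delta-rule style correction that moves an effector toward a target position. It is not taken from any of the cited models; the function names and the learning rate, discount factor, and gain values are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch only: a TD(0) value update and a delta-rule error
# correction toward a target position. Parameter values are arbitrary.

def td_update(value, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One temporal-difference update of a tabular value estimate."""
    td_error = reward + gamma * value[next_state] - value[state]
    value[state] += alpha * td_error
    return td_error

def error_correction_step(current_position, target_position, gain=0.2):
    """Move a fraction of the remaining error toward the target."""
    error = target_position - current_position
    return current_position + gain * error

# Example: a two-state value table and a one-dimensional reaching movement.
values = np.zeros(2)
print(td_update(values, state=0, next_state=1, reward=1.0))  # TD error = 1.0

position = 0.0
for _ in range(5):
    position = error_correction_step(position, target_position=1.0)
print(position)  # ~0.67 after five steps, approaching the target of 1.0
```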
In the 1970s, several modelers combined lateral inhibition and associative
learning in various ways to develop early multilevel networks for perceptual coding
(e.g., Refs. [45–49]). These models usually included a retinal and a cortical level,
with the cortical level learning a categorization of stimulus patterns impinging on
the retina. The categorization was based on learned retinal-to-cortical (bottom-up)
connections that learned to encode commonly presented patterns. Grossberg [50]
showed that for the categorization to be stable over time, the learned bottom-up
connections needed to be supplemented by learned top-down feedback. This
combination of bottom-up and top-down connections was the origin of what became
known as adaptive resonance theory (ART; [51]).
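The following Python sketch is a loose, simplified illustration of that bottom-up/top-down cycle in the style of ART 1 for binary patterns: bottom-up overlap orders the candidate categories, a top-down template match against a vigilance threshold decides whether resonance occurs, and the winning template is refined (or a new category is committed). The choice function, vigilance value, and learning rule here are simplifying assumptions, not the full adaptive resonance theory model.

```python
import numpy as np

# Simplified ART-1-style sketch (binary inputs). The choice function,
# vigilance value, and learning rule are assumptions for illustration.

def art_present(pattern, templates, vigilance=0.6):
    """Return the index of the resonating category; learns as a side effect."""
    pattern = np.asarray(pattern, dtype=bool)
    # Bottom-up pass: order candidate categories by overlap with the input.
    order = sorted(range(len(templates)),
                   key=lambda j: -int(np.sum(pattern & templates[j])))
    for j in order:
        # Top-down pass: how well does the learned template match the input?
        match = np.sum(pattern & templates[j]) / max(int(np.sum(pattern)), 1)
        if match >= vigilance:
            # Resonance: refine the prototype toward the intersection.
            templates[j] = pattern & templates[j]
            return j
    # No stored category matched closely enough: commit a new one.
    templates.append(pattern.copy())
    return len(templates) - 1

templates = []
print(art_present([1, 1, 0, 0], templates))  # 0: first pattern founds category 0
print(art_present([1, 1, 1, 0], templates))  # 0: resonates with and refines category 0
print(art_present([0, 0, 1, 1], templates))  # 1: too different, founds category 1
```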
All of these principles were first suggested on psychological grounds but verified
many years later by data from neuroscience. The results that emerged from neuroscience
led in this century to refinements and extensions of the earlier models, and
the newer models increasingly incorporated explicit representations of brain regions.
3. NEURAL NETWORKS ENTER MAINSTREAM SCIENCE
Yet the scientists who labored in the neural network field between the 1960s and
1980s were not widely known and had to find academic appointments in more
traditional fields. Many have labeled that period the dark ages of the field, but Stephen
Grossberg, in a plenary talk at the International Joint Conference on Neural Networks
(IJCNN), said it should instead be called a golden age, because it was creative and
spawned many of the key ideas of the field that are still in use.
This state of affairs changed in the 1980s with a surge of interest in the relation-
ships between neuroscience and artificial intelligence, at a more sophisticated level
than had occurred in the 1940s. Artificial intelligence researchers increasingly found
that the methods of symbolic heuristic programming that had dominated their field
were inadequate to handle situations that involved processing imprecise information.
Hence, after having abandoned interest in the brain for nearly 30 years, they started
turning back to neuroscience and psychology for possible answers to their problems.
At the same time, several publications in neural modeling, such as the article by
Hopfield [52] and the two-volume book edited by Rumelhart and McClelland [4],
caught the attention of psychologists and neuroscientists by showing that simple
networks could reproduce some of their data.