4. Is Computational Neuroscience Separate From Neural Network Theory?
conditioning [63]. Yet this modeling approach in some ways betrays its one-size-fits-all origins. It is based on distributed representations of concepts, even though there are grounds for giving some models more localized representations (see Ref. [64] for discussion). Also, the LEABRA equations apply both associative learning and error-driven learning at every synapse, rather than assigning different types of learning to different processes.
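To make that combination concrete, the sketch below mixes a Hebbian (associative) term with a delta-rule (error-driven) term in a single weight update, in the spirit of LEABRA's use of both at every synapse. It is a minimal illustration rather than the actual LEABRA equations: the linear activation, the `mixed_update` name, and all parameter values are assumptions introduced here.

```python
# Illustrative sketch (not the actual LEABRA equations): one weight update
# that mixes an associative (Hebbian) term with an error-driven (delta-rule)
# term, as LEABRA does at every synapse.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 2
W = rng.normal(scale=0.1, size=(n_out, n_in))

def mixed_update(W, x, target, lrate=0.1, hebb_mix=0.2):
    """One update mixing associative and error-driven learning.

    hebb_mix weights the Hebbian term against the delta-rule term;
    the linear activation and all values here are illustrative.
    """
    y = W @ x                                # linear unit activations
    hebbian = np.outer(y, x)                 # associative term: co-activity
    error_driven = np.outer(target - y, x)   # delta rule: driven by error
    return W + lrate * (hebb_mix * hebbian + (1 - hebb_mix) * error_driven)

x = np.array([1.0, 0.0, 1.0, 0.0])
target = np.array([1.0, 0.0])
W = mixed_update(W, x, target)
print(np.round(W, 3))
```

Setting `hebb_mix` to 0 or 1 recovers purely error-driven or purely associative learning, which is the kind of separation between processes that the text suggests some models may require.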
A second source of many CCN models, particularly in the area of conditioning, is the temporal difference (TD) model of Sutton and Barto [10,11]. The TD idea that learning occurs when there is an error in reward prediction (i.e., a previously neutral stimulus becomes rewarding or a previously rewarding stimulus becomes unrewarding) obtained experimental support from results on responses of dopamine neurons in the midbrain [65–67]. Reward prediction was thought to be implemented via dopamine neuron connections to the basal ganglia, and several later variants of the TD model exploited this connection [68–70]. These models built on the notion of reinforcement learning with an actor and a critic, a design that is also popular in control engineering (e.g., Ref. [71]).
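The core TD computation is compact enough to show directly. The sketch below runs TD(0) value learning over repeated conditioning trials in which a cue at the first time step is followed by reward at the last; over trials, the prediction error (the putative dopamine signal) migrates from the time of reward back to the time of the cue. This is a minimal illustration of the TD idea, not Sutton and Barto's full model, and the parameter values are illustrative assumptions.

```python
# Minimal TD(0) sketch of reward-prediction error in conditioning:
# a cue at step 0 is followed by reward at the final step of each trial.
import numpy as np

T = 6                    # time steps within a trial
gamma, alpha = 1.0, 0.3  # discount and learning rate (illustrative)
V = np.zeros(T + 1)      # value estimate per time step; V[T] is terminal
reward = np.zeros(T)
reward[-1] = 1.0         # reward arrives at the last step

for trial in range(200):
    for t in range(T):
        delta = reward[t] + gamma * V[t + 1] - V[t]  # TD prediction error
        V[t] += alpha * delta

# After learning, V[0] is near 1: the cue fully predicts the reward,
# so the prediction error has moved from reward time to cue onset.
print(np.round(V[:T], 2))
```

Once the value at the cue approaches the reward value, delivering the reward produces no error, while omitting it produces a negative error at the expected reward time, consistent with the dopamine recordings cited above.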
The TD approach is popular with conditioning researchers because it is built around a single, easily understandable principle, namely, maximization of predicted future reward. Yet its very simplicity, with the implication that a unique locus in the brain controls Pavlovian learning, limits the predictive applicability of this approach unless it is extended to incorporate principles that assign roles to regions not included in these articles, such as the amygdala and prefrontal cortex (see Ref. [72] for a review).
Other CCN models come not from previous simpler neural models but from neural elaborations of previous nonneural models from mathematical psychology. This approach has been particularly fruitful in the area of category learning [73–75]. Interestingly, Love and Gureckis [75] noted the kinship of their model with the adaptive resonance theory of categorization [51], which was based on associative learning combined with lateral inhibition and opponent processing.
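As a rough illustration of how those ingredients interact, the toy categorizer below uses a winner-take-all search (standing in for lateral inhibition) and a vigilance test that decides whether the winning category resonates and learns or is reset. It is a sketch in the spirit of ART-1, not Grossberg's actual equations; opponent processing is omitted, and the `ToyART` class, the vigilance value, and the match rule are simplifying assumptions introduced here.

```python
# Toy ART-1-style categorizer: winner-take-all choice plus a vigilance
# test; a new category is created when no existing template resonates.
import numpy as np

class ToyART:
    def __init__(self, vigilance=0.7):
        self.rho = vigilance      # match threshold for resonance
        self.prototypes = []      # learned binary category templates

    def present(self, x):
        x = np.asarray(x, dtype=float)
        # Winner-take-all search (stand-in for lateral inhibition),
        # trying categories in order of overlap with the input.
        order = sorted(range(len(self.prototypes)),
                       key=lambda j: -np.sum(np.minimum(x, self.prototypes[j])))
        for j in order:
            w = self.prototypes[j]
            match = np.sum(np.minimum(x, w)) / x.sum()  # vigilance test
            if match >= self.rho:                       # resonance: learn
                self.prototypes[j] = np.minimum(x, w)
                return j
        self.prototypes.append(x.copy())  # all reset: create new category
        return len(self.prototypes) - 1

art = ToyART()
for pattern in ([1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1]):
    print(art.present(pattern))   # prints 0, 1, 2 at vigilance 0.7
```

Because a template can only shrink toward the intersection of the inputs it has matched, established categories are never overwritten by novel inputs, one simple way of learning new inputs without forgetting old ones, the complementary pair mentioned below.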
Finally, there are a number of CCN models which are refinements or extensions
of more abstract models that arose before current data were available but embodied
network principles based on cognitive requirements. The cognitive requirements
these networks were designed to fulfill are frequently based on complementary
pairs, such as learning new inputs without forgetting old ones, or processing both
boundaries and interiors of visual scenes (e.g., [76]). One example of this type of
CCN model is Grossberg and Versace [77], which extends the previously more
abstract adaptive resonance model to incorporate neural data about corticothalamic
interactions and the role of acetylcholine. Another example is Ref. [78], which is built
on previously more abstract conditioning models and incorporates neural data
about dopaminergic prediction error and different roles for the amygdala and
orbitofrontal cortex.
My answer to the question posed by the title of this section is no: computational
neuroscience, or at least computational cognitive neuroscience, is not separate from
neural network theory. That is, CCN is not a fundamental conceptual break from