Page 58 - Artificial Intelligence in the Age of Neural Networks and Brain Computing
These models illustrate how all neocortical areas combine bottom-up, horizontal,
and top-down interactions that embody variations of the same canonical laminar
cortical circuitry that is illustrated by Fig. 2.7. These specialized laminar architectures
thereby provide a blueprint for a general-purpose VLSI chip set whose specializations
may be used to embody different kinds of biological intelligence as part of an
autonomous adaptive agent. From the perspective of ART as a biological theory,
they also illustrate how different resonances may use similar circuits to support
different conscious experiences, as I will note in greater detail below.
8. WHY A UNIFIED THEORY IS POSSIBLE: EQUATIONS,
MODULES, AND ARCHITECTURES
There are several fundamental mathematical reasons why it is possible for human
scientists to discover a unified mind-brain theory that links brain mechanisms and
psychological functions, and to demonstrate how similar organizational principles
and mechanisms, suitably specialized, can support conscious qualia across modalities.
One reason for such intermodality unity is that a small number of equations
suffice to model all modalities. These include equations for short-term memory,
or STM; medium-term memory, or MTM; and long-term memory, or LTM, that I
published in the Proceedings of the National Academy of Sciences in 1968. See
Refs. [12,13] for recent reviews of these equations.
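As a schematic illustration, the three equation types can be written as coupled differential equations: an additive STM law for cell activities, a habituative transmitter gate for MTM, and gated steepest descent for LTM. The particular symbols and signal functions below follow common presentations of these laws rather than any single model; exact parameters and functional forms vary across applications.

```latex
\begin{aligned}
\text{STM:}\quad & \frac{dx_i}{dt} = -A_i x_i + \sum_{k} f_k(x_k)\, y_{ki}\, z_{ki} + I_i \\[4pt]
\text{MTM:}\quad & \frac{dy_{ki}}{dt} = B\,(1 - y_{ki}) - C\, f_k(x_k)\, y_{ki} \\[4pt]
\text{LTM:}\quad & \frac{dz_{ki}}{dt} = f_k(x_k)\,\bigl[-z_{ki} + h(x_i)\bigr]
\end{aligned}
```

Here $x_i$ is the activity of cell (population) $i$, $y_{ki}$ the habituative transmitter gating the pathway from $k$ to $i$, and $z_{ki}$ the adaptive weight in that pathway; learning in the LTM law occurs only when the presynaptic signal $f_k(x_k)$ is active, which is what "gated" steepest descent means.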
These equations are used to define a somewhat larger number of modules, or
microcircuits, that are also used in multiple modalities where they can carry out
different functions within each modality. These modules include shunting
on-center off-surround networks, gated dipole opponent processing networks,
associative learning networks, spectral adaptively timed learning networks, and
the like. Each of these types of modules exhibits a rich, but not universal, set of
useful computational properties. For example, shunting on-center off-surround
networks exhibit properties such as contrast normalization, including discounting
the illuminant; contrast enhancement, noise suppression, and winner-take-all choice;
short-term memory and working memory storage; attentive matching of bottom-up
input patterns and top-down learned expectations; and synchronous oscillations and
traveling waves.
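To make the normalization property concrete, the following is a minimal numerical sketch (function name and parameter values are illustrative, not taken from the text) of a feedforward shunting on-center off-surround network at equilibrium. Setting dx_i/dt = 0 in the shunting law yields activities proportional to input ratios, with total activity bounded, which is the basis of properties such as discounting the illuminant.

```python
import numpy as np

def shunting_equilibrium(I, A=1.0, B=1.0):
    """Equilibrium of a feedforward shunting on-center off-surround network:

        dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{k != i} I_k

    Setting dx_i/dt = 0 gives x_i = B*I_i / (A + sum_k I_k):
    activities encode input *ratios* (contrast normalization),
    and total activity stays below B no matter how intense the input.
    """
    I = np.asarray(I, dtype=float)
    return B * I / (A + I.sum())

# Same input ratios at two overall intensities ("discounting the illuminant"):
dim    = shunting_equilibrium([1.0, 2.0, 1.0])
bright = shunting_equilibrium([10.0, 20.0, 10.0])

print(dim / dim.sum())        # relative pattern of the dim input
print(bright / bright.sum())  # identical relative pattern, despite 10x input
```

Despite a tenfold difference in input intensity, both equilibria have the same relative activity pattern, while the total activity in each case remains below the upper bound B.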
Finally, these equations and modules are specialized and assembled into modal
architectures, where “modal” stands for different modalities of biological intelligence,
including architectures for vision, audition, cognition, cognitive-emotional interac-
tions, and sensory-motor control.
An integrated self or agent, with autonomous adaptive capabilities, is possible
because it builds on a shared set of equations and modules within modal architectures
that can interact seamlessly.
Modal architectures are general-purpose, in the sense that they can process any
kind of inputs to that modality, whether from the external world or from other modal
architectures. They are also self-organizing, in the sense that they can autonomously