FIGURE 2.7
The LAMINART model clarifies how bottom-up, horizontal, and top-down interactions
within and across cortical layers in V1 and V2 interblob and pale stripe regions,
respectively, carry out bottom-up adaptive filtering, horizontal grouping, and top-down
attention to achieve perceptual grouping, including boundary completion. Similar
interactions seem to occur in all six-layered cortices. See text for details.
Reprinted with permission from R. Raizada, S. Grossberg, Context-sensitive bindings by the laminar circuits of
V1 and V2: a unified model of perceptual grouping, attention, and orientation contrast, Visual Cognition 8 (2001)
431–466.
self-stabilize even before higher cortical areas are developed enough to send reliable
top-down intercortical attentional signals with which to further stabilize it. Thus
“cells that fire together can wire together” without risking catastrophic forgetting
in these laminar cortical circuits. I like to describe this property by saying that
“the preattentive perceptual grouping is its own attentional prime” [23].
The above combination of properties illustrates how parsimoniously and
elegantly laminar cortical circuits carry out their multifaceted functions.
Even elegant model designs must also support intelligent behavioral functions in
order to provide compelling explanations of how brains work and to guide new
technological developments. In fact, variations of the LAMINART cortical design
have, to the present, been naturally embodied in laminar cortical models of vision,
speech, and cognition that explain and predict psychological and neurobiological
data that other models have not yet handled. These models include the 3D LAMINART
model of 3D vision and figure-ground separation (e.g., Refs. [32,33,36–41]),
the cARTWORD model of conscious speech perception [35,42], and the LIST
PARSE model of cognitive working memory and chunking [34,43].