"problem'' and the resulting settled state as a whole is what is termed
the system's "solution."
A distinct and important feature of connectionist systems is the way
they "remember" things. The only things physically present in the system
are currently active representations. What in a classical system would be
"stored beliefs" are not located at any one place in the system. Instead,
the **weightings" are set so that appropriate representations can be
recreated given appropriate input, but these representations are not
located anywhere in the system except in the weightings as dispositions
to create these representations under the proper circumstances.
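A minimal sketch, in Python, may make this concrete; the particular
patterns, the Hebbian weight-setting, and the update rule are illustrative
assumptions of mine, not details the text commits itself to. In a
Hopfield-style associative memory, the two "remembered" patterns are
present nowhere in the network except as the weight matrix, yet a
degraded cue recreates the corresponding full pattern when the system
settles, which is also the sense of "settling into a solution" noted
above.

    import numpy as np

    # Two patterns the network is to "remember." Once the weights are set,
    # these patterns exist nowhere in the system except as dispositions
    # encoded in the weight matrix W.
    patterns = np.array([
        [ 1, -1,  1, -1,  1, -1],
        [ 1,  1,  1, -1, -1, -1],
    ])

    # Hebbian weight-setting: W accumulates the outer products of the
    # patterns, with no self-connections.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    def settle(state, steps=20):
        """Update all units repeatedly until the activity stops changing."""
        state = state.astype(float)
        for _ in range(steps):
            new_state = np.sign(W @ state)
            new_state[new_state == 0] = 1.0
            if np.array_equal(new_state, state):
                break                      # a stable state: the "solution"
            state = new_state
        return state.astype(int)

    # A degraded cue (the last unit is flipped) is enough for the network
    # to recreate the first stored pattern.
    cue = np.array([1, -1, 1, -1, 1, 1])
    print(settle(cue))                     # -> [ 1 -1  1 -1  1 -1]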
For what follows, I also need to mention two other features of such
systems and then allude to some of the characteristics they exhibit. Often
the systems will be organized into layers by introducing connections that
are asymmetrical in direction. For instance, one set of units will be
capable of receiving input and passing on activations to some or all of
the units in another set, which may themselves be interconnected, but
not connected back to any of the units in the first set. These are called
"feed-forward networks." Moreover, for certain tasks it appears ad-
vantageous or even necessary to set things up so that there are one or
more intervening layers between the so-called input layer and the one
which is interpreted as the output layer.
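A brief sketch, again in Python and again only illustrative (the
exclusive-or task and the hand-set weights are standard textbook choices
of mine, not drawn from the text), shows such a feed-forward arrangement:
activation passes from an input layer through one intervening ("hidden")
layer to an output layer and never flows back, and it is the hidden layer
that makes the mapping computable at all, since no single layer of
weighted connections can compute exclusive-or.

    import numpy as np

    def step(x):
        """Threshold activation: a unit is on (1) above zero, off (0) otherwise."""
        return (np.asarray(x) > 0).astype(int)

    # Hand-set weights for a feed-forward network computing exclusive-or.
    # Two input units -> two hidden units -> one output unit; activation
    # only moves forward, never back to an earlier layer.
    W_hidden = np.array([[1.0, 1.0],   # hidden unit 1: fires if either input is on ("or")
                         [1.0, 1.0]])  # hidden unit 2: fires only if both inputs are on ("and")
    b_hidden = np.array([-0.5, -1.5])
    W_output = np.array([1.0, -1.0])   # output unit: "or" but not "and", i.e. exclusive-or
    b_output = -0.5

    def forward(x):
        hidden = step(W_hidden @ x + b_hidden)      # input layer  -> hidden layer
        return step(W_output @ hidden + b_output)   # hidden layer -> output layer

    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(x, "->", forward(np.array(x)))        # 0, 1, 1, 0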
As a second point, it is also important to note another feature that
these systems exhibit as dynamical systems. In connectionist systems, even
the slightest differences in initial settings can be hugely magnified during
the process so that vastly different outcomes can result from the same
input (the kinds of differences that "chaos theories" in mathematics and
physics describe, for instance). Conversely, however, it is also possible
for exactly the same input to result in exactly the same output in two
different systems whose hidden layers may be configured quite differently,
or which may even be lacking hidden layers, but have very different
initial weightings which may have cancelled each other out in a specific
case. Taken together with the "chaos factor," this means that one cannot
rely on having observed similar responses from two different systems when
they were presented with similar input in the past in order to predict
that their responses to some identical input in the future will even
closely resemble each other. Consequently, even though there is a micro-level
determinacy about what will happen given a certain initial state of the
system and a specific input, there is no way to derive strict laws about
how two different systems will perform relative to each other in the

