be learned (by rehearsal), stored, and recalled using confabulation architectures (e.g., the UCSD graduate students in my course built a confabulation-based checker-playing system that learned to play, i.e., to trigger appropriate actions, by mimicking a skilled human). Confabulation architectures for appropriately modifying action sequences, in real time, in response to changes in the world state that occur during execution (a crucial capability if we are to perform in a complicated, real-world environment) have also been developed.
Obviously, when the conclusion–action principle "branching" capability is combined with an ability to store and retrieve data (e.g., using short-term, medium-term, or long-term memory, or working memory), the cognitive brain passes the test of being, at least conceptually, capable of universal computation in the Turing sense. However, the very limited "RAM memory" or "tape memory" available for immediate reading and writing probably limits the value of this capability. Certainly, as demonstrated in Hecht-Nielsen (2005), logical reasoning in Aristotelian information environments is carried out directly by confabulation (cogency maximization), without need for any recourse to computer principles. Nonetheless, a human with paper and pencil (to supplement the extremely limited "RAM memory" available in the brain) can easily learn to carry out thought processes that will accurately simulate the operation of a computer. However, such a "human-implemented computer" is to a modern desktop electronic computer as a unicycle is to a racecar.
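To make the branching-plus-memory point concrete, the following toy Python sketch (entirely illustrative, and not drawn from this chapter) shows how a conclusion-triggers-action rule, together with a single readable and writable memory cell, already supports the conditional looping that underlies Turing-style computation:

    # Toy sketch (hypothetical; not the chapter's mechanism): a conclusion-
    # action rule that branches on the contents of one read/write memory cell.

    def step(state, cell):
        # Each matched condition is a "conclusion"; the returned pair is the
        # action it triggers: write the cell and/or branch to a new state.
        if state == "count" and cell > 0:
            return "count", cell - 1   # stay in the loop, decrement the cell
        return "halt", cell            # branch out once the cell reaches zero

    state, cell = "count", 3
    while state != "halt":
        state, cell = step(state, cell)
    print(cell)  # prints 0: a loop realized purely by conclusion -> action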
Given the parse of an assumed fact phrase (say, the first few words of a sentence), we can then use the architecture of Figure 3.2 to carry out "phrase completion" (as in Hecht-Nielsen, 2005). The first step is to build an expectation in the phrase lexicon above the next word's lexicon by activating the knowledge bases between the last active phrase lexicon of the parse and that "target" phrase lexicon and then performing a C1F. This first step exploits the fact that adjacent phrases are usually highly coherent; it would be rare indeed for the next phrase not to receive knowledge links from the last known phrase of the parse. The result of this first step is an expectation on the target phrase lexicon containing all of the reasonable next phrases. Note that the last-phrase lexicon of the parse may itself have an expectation containing multiple symbols, which themselves could contain the next word.
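As a concrete, though greatly simplified, illustration of this first step, consider the following Python sketch. Here lexicons are modeled as sets of symbols, a knowledge base as a mapping from each source symbol to the target symbols it links to, and C1F is idealized as admitting into the expectation every target symbol that receives at least one knowledge link. The names (c1f, phrase_links, and the phrase strings) are illustrative assumptions, not the chapter's notation:

    def c1f(source_expectation, knowledge_base):
        # Confabulation on one fact: the expectation on the target lexicon is
        # every symbol receiving a knowledge link from an active source symbol.
        expectation = set()
        for symbol in source_expectation:
            expectation |= knowledge_base.get(symbol, set())
        return expectation

    # The last active phrase lexicon of the parse holds one known phrase
    # symbol; the knowledge base records which next phrases it links to.
    phrase_links = {"when all of": {"a", "a sudden", "a sort"}}
    print(c1f({"when all of"}, phrase_links))  # all reasonable next phrases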
For example, as above, if sudden were not present in the starting word string phrase "The canoe trip was going smoothly when all of a sudden", the last-phrase lexicon would have an expectation with multiple phrases, including "all of a", "all of a sort", "all of a sudden", "all of a kind", etc. If, for example, all of these symbols represent multi-word phrases, then the target phrase lexicon expectation will automatically be empty (since none of the phrases in the last-phrase lexicon's expectation will have any knowledge links to symbols of that lexicon). If this is not clear, work through some examples with Figure 3.2 and a diagram on a piece of paper (the sketch after this paragraph works one such case). This is a perfect example of how all thought processes are conclusion-driven.
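Continuing the sketch above, under the same simplified assumptions, the empty-expectation case falls out directly: if none of the multi-word phrase symbols in the last-phrase lexicon's expectation has an outgoing link to the target phrase lexicon (because the continuation still lies inside each phrase), the C1F yields nothing:

    last_phrase_expectation = {"all of a", "all of a sort",
                               "all of a sudden", "all of a kind"}
    # Hypothetical: no symbol here links onward to the target phrase lexicon.
    print(c1f(last_phrase_expectation, {}))  # set(): an empty expectation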
The expectations established by the above process then send output through their knowledge links to the first unfilled word lexicon, where an expectation is formed by a C1F. Since this word lexicon is the next one after the last assumed fact word lexicon, we can again assume that the symbols in this expectation represent all reasonable possibilities for the next word of the continuation. Then knowledge linking the rest of the parsed phrase symbols to the word lexicon is used, with a W, to select the word symbol in the expectation that is most consistent with this additional context. Here again, there are many possible things that could go on (e.g., knowledge links may or may not exist from various phrase symbols to words of the expectation); yet, whatever the situation, this process works better than one using the architecture of Figure 3.1. A bit of time spent thinking about this phrase completion process with some concrete examples will be most illuminating and compelling. Try to build some meaningful examples where this process will not work. You won't be able to.
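One way to see this two-stage word selection concretely, under the same simplified model as the earlier sketches, is the following: the C1F from the phrase expectation yields the candidate next words, and W is idealized here as selecting the candidate with the greatest total knowledge-link support summed over all phrase symbols of the parse. The function name w_select, the link strengths, and the mapping phrase_to_word are all illustrative assumptions:

    def w_select(candidates, context_symbols, phrase_to_word):
        # W, idealized: pick the candidate word most consistent with the
        # whole parsed context (greatest summed knowledge-link strength).
        def support(word):
            return sum(phrase_to_word.get(p, {}).get(word, 0.0)
                       for p in context_symbols)
        return max(candidates, key=support)

    phrase_to_word = {
        "when all of": {"a": 0.6, "them": 0.5, "us": 0.5},
        "the canoe trip": {"a": 0.3},          # no links for 'them' or 'us'
        "was going smoothly": {"a": 0.2},
    }
    candidates = {"a", "them", "us"}   # the expectation formed by the C1F
    context = {"the canoe trip", "was going smoothly", "when all of"}
    print(w_select(candidates, context, phrase_to_word))  # prints 'a'

Note that, just as in the text, some phrase symbols contribute no links to some candidates; the selection still goes through, with the missing links simply contributing nothing.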
Why would this phrase completion process (using the architecture of Figure 3.2) be better than just using word-level knowledge, as described earlier in connection with Figure 3.1? The answer is that knowledge links from phrases to words generally have two characteristics superior to those of links at the word level. First, the parse often removes a significant amount of ambiguity that can exist in word-level knowledge. For example, the word lexicon symbol for the word New will