
   Hopfield, Perceptron, error-driven back-propagation, Boltzmann machine,
and interpattern association (IPA) are some of the well-known supervised-
learning NNs. Models such as adaptive resonance theory, neocognitron,
Madaline, and the Kohonen self-organizing feature map are among the
best-known unsupervised-learning NNs. In the following subsection, we first
discuss a couple of supervised-learning models.


       7.8.1. RECOGNITION BY SUPERVISED LEARNING

   The Hopfield and interpattern association NNs, presented in Sec. 2.9, are
typical examples of supervised-learning models. For example, if an NN has no
pre-encoded memory matrix (e.g., IWM), the network is not capable of
retrieving the input pattern. Instead of illustrating a great number of supervised
NNs, we will now describe a heteroassociation NN that uses the interpattern
association algorithm.
   The strategy is to supervise the NN learning so that the NN is capable of
translating a set of patterns into another set. For example, we let patterns A,
B, and C, located in a pattern space, be translated into A', B', and C',
respectively, as presented by the Venn diagrams in Fig. 7.47. Then a hetero-
association NN can be constructed by simple logic functions, such as
         I   = A ∧ ¬(B ∨ C)              I'   = A' ∧ ¬(B' ∨ C')
         II  = B ∧ ¬(A ∨ C)              II'  = B' ∧ ¬(A' ∨ C')
         III = C ∧ ¬(A ∨ B)              III' = C' ∧ ¬(A' ∨ B')
         IV  = (A ∧ B) ∧ ¬C              IV'  = (A' ∧ B') ∧ ¬C'
         V   = (B ∧ C) ∧ ¬A              V'   = (B' ∧ C') ∧ ¬A'
         VI  = (C ∧ A) ∧ ¬B              VI'  = (C' ∧ A') ∧ ¬B'
         VII = (A ∧ B ∧ C) ∧ ¬∅          VII' = (A' ∧ B' ∧ C') ∧ ¬∅

where ∧, ∨, and ¬ stand for the logic AND, OR, and NOT operations,
respectively, and ∅ denotes the empty set.
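
Written out this way, Eqs. (I)–(VII) simply carve the input pattern space into the seven disjoint regions of a three-set Venn diagram, and the primed expressions do the same in the output space. The following is a minimal Python/NumPy sketch of the input-space decomposition; the three 3 × 3 binary patterns are arbitrary stand-ins, not the training patterns of Fig. 7.48a.

```python
import numpy as np

# Three hypothetical 3 x 3 binary patterns standing in for A, B, and C;
# the actual training patterns of Fig. 7.48a are not reproduced here.
A = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=bool)
B = np.array([[1, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=bool)
C = np.array([[1, 0, 1],
              [0, 0, 0],
              [0, 1, 0]], dtype=bool)

def subspaces(A, B, C):
    """Carve the pattern space into the seven disjoint Venn regions."""
    return {
        "I":   A & ~(B | C),   # pixels belonging to A only
        "II":  B & ~(A | C),   # pixels belonging to B only
        "III": C & ~(A | B),   # pixels belonging to C only
        "IV":  (A & B) & ~C,   # shared by A and B but not C
        "V":   (B & C) & ~A,   # shared by B and C but not A
        "VI":  (C & A) & ~B,   # shared by C and A but not B
        "VII": A & B & C,      # shared by all three patterns
    }

for name, mask in subspaces(A, B, C).items():
    print(name, np.flatnonzero(mask))  # linear indices of pixels in each region
```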
   For simplicity, we assume that A, B, C, A', B', and C' are the input-output
pattern training sets, as shown in Fig. 7.48a. Pixel 1 is the common pixel of
patterns A, B, and C; pixel 2 is the common pixel between patterns A and B;
pixel 3 is the common pixel between A and C; and pixel 4 represents the special
feature of pattern C. Likewise, from the output patterns, pixel 4 is the special
pixel of pattern B', and so on. A single-layer neural network can therefore be
constructed, as shown in Fig. 7.48b. Notice that the second output neuron,
representing the common pixel of patterns A', B', and C', has positive
interconnections from all the input neurons. The fourth output neuron, a
special pixel of B', is
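
The single-layer retrieval mechanics can be sketched in code. The excerpt does not spell out the IPA-derived weights of Fig. 7.48b, so the sketch below substitutes a generic bipolar outer-product (Hebbian) rule for the interconnections; the 4-pixel training pairs are hypothetical stand-ins, not the patterns of Fig. 7.48a.

```python
import numpy as np

def bipolar(p):
    """Map {0,1} pixel lists to {-1,+1} vectors."""
    return 2.0 * np.asarray(p, dtype=float).ravel() - 1.0

# Hypothetical 4-pixel input/output training pairs (A -> A', etc.);
# illustrative stand-ins, not the training set of Fig. 7.48a.
A,  B,  C  = [1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]
Ap, Bp, Cp = [0, 1, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]
pairs = [(A, Ap), (B, Bp), (C, Cp)]

# Bipolar outer-product (Hebbian) weights -- a generic stand-in for the
# IPA-derived interconnections of Fig. 7.48b.
W = sum(np.outer(bipolar(y), bipolar(x)) for x, y in pairs)

def recall(x):
    """Single-layer retrieval: weighted sum followed by hard thresholding."""
    return (W @ bipolar(x) >= 0).astype(int)

for x, y in pairs:
    print(x, "->", recall(x).tolist(), "  target:", y)
```

Because the three stand-in inputs happen to be mutually orthogonal in bipolar form, one-pass recall here is exact. The IPA construction in the text instead assigns excitatory and inhibitory interconnections directly from the logic subspaces above, as the discussion of the second and fourth output neurons illustrates.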