
elements), each probably having a small amount of local memory. The processors are linked by communication channels (synapses), which usually carry numeric data, encoded by any of various means. The units operate only on their local data and on the inputs they receive via the communication channels.
Most artificial neural networks have some sort of "training" rule whereby the weights of the synapses are adjusted on the basis of data. In other words, artificial neural networks "learn" from examples (as children learn to recognise cats from examples of cats) and show some capability for generalisation beyond the training data. Simple linear regression (a minimal feedforward net with only two processing elements plus bias) is usefully regarded as a special case of artificial neural networks (see the sketch after the list below). Artificial neural networks have been used in many different fields in recent years. A few engineering application examples:
- Rudder force prediction (Koushan et al. 1998)
- Propeller induced pressure pulses (Koushan 2000)
- Robot control
- Pattern recognition
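
To make this concrete, the following sketch (plain Python with NumPy; the function and variable names are illustrative and not taken from the paper) shows a single processing element forming a weighted sum of its inputs plus a bias and passing the result through an activation function. With an identity (linear) activation and a single output, this reduces exactly to ordinary linear regression, which is why simple linear regression can be viewed as a minimal feedforward network.

```python
import numpy as np

def processing_element(inputs, weights, bias, activation=np.tanh):
    """One unit: a weighted sum of the inputs arriving via the synapses,
    plus a bias, passed through an activation function."""
    net = np.dot(weights, inputs) + bias
    return activation(net)

# With an identity activation this single element is exactly the
# linear regression model y = w1*x1 + w2*x2 + b.
identity = lambda z: z
x = np.array([0.4, 0.7])      # two (illustrative) input values
w = np.array([1.5, -0.8])     # synapse weights
b = 0.2                       # bias
print(processing_element(x, w, b, activation=identity))
```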
Figure 1 shows the main components of a feed-forward recall artificial neural network. These are the input layer, synapses, one or more hidden layers (axons) and an output layer.

[Figure: blocks labelled Hidden Layer Synapse, Output Layer Synapse, Output Axon and Output Data]
Figure 1: Main components of a feed-forward recall network with a single hidden layer
The input layer is used to feed the network with data. This layer usually normalises the data to the range 0 to 1 or -1 to 1, depending on the activation function used. Synapses connect the different layers. A hidden layer is made up of one or more processing elements and corresponding activation functions, which transform the inputs of the processing elements. Typical activation functions are linear, sigmoid and tanh functions. The output layer usually operates like a hidden layer and can in addition denormalise the output; a recall pass of this kind is sketched below.

There are different procedures for training the network. A typical procedure is backpropagation. In a backpropagation procedure, the network starts with a random set of weights. The network output is then compared with the desired output and the error is computed. These errors are then propagated backwards through the network to find better weights. Again, there are different ways of reaching optimal weights from the errors. Apart from the weights and activation functions, the number of hidden layers and the number of processing elements in each hidden layer must also be optimised during training. Usually one half of the database is used for training, whereas the other half is used for verification of the network, i.e. the network does not "see" the verification set during the optimisation process. A simple training loop of this kind is sketched below.

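The sketch below illustrates such a training procedure under stated assumptions: a single hidden layer with tanh activation, a linear output layer, a mean-squared-error measure and plain gradient descent, with the first half of the database used for training and the second half held out for verification. All names are illustrative; this is not the authors' implementation.

```python
import numpy as np

def train(X, Y, n_hidden=5, epochs=2000, lr=0.01, seed=0):
    """Backpropagation sketch: half the data trains the weights,
    the held-out half is only used to verify generalisation."""
    rng = np.random.default_rng(seed)
    n = len(X) // 2
    Xt, Yt, Xv, Yv = X[:n], Y[:n], X[n:], Y[n:]   # training / verification halves

    # Start from a random set of weights
    W1 = rng.normal(0.0, 0.5, (n_hidden, X.shape[1])); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (1, n_hidden));          b2 = np.zeros(1)

    for _ in range(epochs):
        # Forward pass: tanh hidden layer, linear output layer
        H = np.tanh(Xt @ W1.T + b1)
        P = H @ W2.T + b2
        E = P - Yt                        # error against the desired output

        # Backward pass: propagate the error to find better weights
        dW2 = E.T @ H / n;       db2 = E.mean(axis=0)
        dH = (E @ W2) * (1.0 - H**2)      # tanh derivative
        dW1 = dH.T @ Xt / n;     db1 = dH.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    # Verification error on data the network never "saw" during training
    Hv = np.tanh(Xv @ W1.T + b1)
    mse_verif = np.mean((Hv @ W2.T + b2 - Yv) ** 2)
    return (W1, b1, W2, b2), mse_verif
```

In practice the number of hidden layers and of processing elements per hidden layer would also be varied, and the configuration with the lowest verification error retained, as described above.
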
            6 CONCLUSIONS

The method presented is a novel procedure for the prediction of total resistance. A reliable prediction method for the total resistance of offshore vessels is presented here for the first time. To keep the method easy to use, the number of input parameters is limited. This has the consequence of ignoring