Commonly used choices for 'func' are
logsig(x) = 1 / (1 + exp(-x))
tansig(x) = 2 / (1 + exp(-2x)) - 1
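As a quick illustration (a sketch, not taken from the text), these two functions can be written as MATLAB anonymous functions. Note that the Neural Network Toolbox already supplies logsig and tansig as built-in transfer functions, so the sketch uses different names to avoid shadowing them.

sig  = @(x) 1 ./ (1 + exp(-x));         % logsig: maps any real x into (0, 1)
tsig = @(x) 2 ./ (1 + exp(-2*x)) - 1;   % tansig: maps any real x into (-1, 1)

sig(0)     % returns 0.5
tsig(0)    % returns 0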
The requirement is to find the common Weight matrix and common Bias vector such that, for a particular input vector, the computed output vector matches the expected target vector. The process of obtaining the Weight matrix and Bias vector is called training.
Consider the architecture shown in figure 1-9. The number of neurons in the input layer is 3 and the number of neurons in the output layer is 2. The number of neurons in the hidden layer is 1. The weight connecting the ith neuron in the first layer and the jth neuron in the hidden layer is represented as wij. The weight connecting the ith neuron in the hidden layer and the jth neuron in the output layer is represented as wij'.
Let the input vector be represented as [i1 i2 i3]. The hidden layer output is represented as [h1] and the output vector is represented as [o1 o2]. The bias in the hidden layer is given as [bh] and the bias vector in the output layer is given as [b1 b2]. The desired output vector is represented as [t1 t2].
The vectors are related as follows.
h1 = func1(w11*i1 + w21*i2 + w31*i3 + bh)
o1 = func2(w11'*h1 + b1)
o2 = func2(w12'*h1 + b2)
t1 ≈ o1
t2 ≈ o2
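For concreteness, the forward pass of this 3-1-2 architecture can be sketched in MATLAB as below. The numerical values of the input, weights, and biases are arbitrary assumptions chosen only for illustration, and logsig is assumed for both func1 and func2.

func1 = @(x) 1 ./ (1 + exp(-x));    % hidden layer activation (logsig assumed)
func2 = @(x) 1 ./ (1 + exp(-x));    % output layer activation (logsig assumed)

i  = [0.2 0.5 0.1];                 % input vector [i1 i2 i3] (assumed values)
W  = [0.4; -0.3; 0.7];              % weights w11, w21, w31 (input -> hidden)
bh = 0.1;                           % hidden layer bias
Wp = [0.6 -0.2];                    % weights w11', w12' (hidden -> output)
b  = [0.05 -0.05];                  % output layer biases [b1 b2]

h1 = func1(i*W + bh);               % hidden layer output
o  = func2(h1*Wp + b);              % output vector [o1 o2]

Training adjusts W, Wp, bh and b until o is close to the target vector [t1 t2].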
4.1 Single Neuron Architecture
Consider a single neuron in the input layer and a single neuron in the output layer. Then
Output = func(Input*W + B)
Let the desired output vector be represented as 'Target'. The requirement is to find the optimal value of W so that the cost function (Target - Output)^2 is minimized. The graph plotted between the weight vector 'W' and the cost function is given in figure 1-11.
The optimum value of the weight vector is the vector corresponding to point 1, which is the global minimum point, where the cost value is lowest.
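A rough way to visualize this cost curve is to sweep the scalar weight W over a range and evaluate (Target - Output)^2 at each value, as sketched below. The input, bias, and target values are assumptions for illustration only, and logsig is assumed for 'func'.

func   = @(x) 1 ./ (1 + exp(-x));            % logsig activation (assumed)
Input  = 0.8;  B = 0.2;  Target = 0.6;       % assumed values for illustration

W    = -10:0.1:10;                           % candidate weight values
Cost = (Target - func(Input*W + B)).^2;      % cost at each candidate weight

plot(W, Cost), xlabel('W'), ylabel('Cost');  % cost curve, as in figure 1-11
[minCost, k] = min(Cost);                    % lowest cost over the sweep
Wopt = W(k);                                 % weight corresponding to the minimum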