Thus the weight vector is adjusted as follows.
New weight vector = Old weight vector - μ * slope of the cost function at the current weight value
Cost function (C) = (Target - Output)^2
⇒ C = (Target - func(input * W + B))^2
The slope is computed by differentiating C with respect to W. Ignoring the derivative of func, it is approximated as follows.
Slope ≈ 2 * (Target - func(input * W + B)) * (-input)
= 2 * error * (-input)
Thus W(n+1) = W(n) - μ * 2 * error * (-input)
⇒ W(n+1) = W(n) + γ * error * input, where γ = 2μ
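The update rule derived above can be illustrated with a short MATLAB fragment. This is only a minimal sketch: the input, the target, the learning rate gamma and the logistic choice for func are assumptions made for illustration and are not part of the derivation.

% Minimal sketch of the derived update rule for a single neuron.
% input, target, gamma and the logistic func are assumed values.
input  = [0.2; 0.7; 0.1];          % example input vector
target = 0.9;                      % desired output
W      = rand(1,3);                % weight vector initialized randomly
B      = rand;                     % bias initialized randomly
gamma  = 0.1;                      % learning rate (gamma = 2*mu)
func   = @(x) 1./(1 + exp(-x));    % assumed activation function
for n = 1:100
    output = func(W*input + B);    % forward computation
    err    = target - output;      % error = Target - Output
    W      = W + gamma*err*input'; % W(n+1) = W(n) + gamma*error*input
    B      = B + gamma*err;        % bias adjusted with the same rule
end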
This is called the back propagation algorithm because the error is back propagated for adjusting the weights. The back propagation algorithm for the sample architecture shown in Figure 1-9 is given below.
4.2 Algorithm
Step 1: Initialize the Weight matrices with random numbers.
Step 2: Initialize the Bias vectors with random numbers.
Step 3: With the initialized values, compute the output vector corresponding to the input vector [i1 i2 i3] as given below. Let the desired target vector be [t1 t2].
h1 = func1(w11*i1 + w21*i2 + w31*i3 + bh)
o1 = func2(w11'*h1 + b1)
o2 = func2(w12'*h1 + b2)
Step 4: Adjustment of the weights
a) Between the Hidden and Output layer
w11'(n+1) = w11'(n) + γO * (t1 - o1) * h1
w12'(n+1) = w12'(n) + γO * (t2 - o2) * h1
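The steps so far can be collected into a short MATLAB sketch for the architecture implied by the equations above (three inputs, a single hidden neuron, two outputs). The input and target values, the learning rate gammaO and the logistic choice for func1 and func2 are assumptions for illustration only; the sketch covers Steps 1 through 4(a).

% Minimal sketch of Steps 1 to 4(a); values and activations are assumed.
i  = [0.1; 0.5; 0.9];              % input vector [i1 i2 i3]
t  = [1; 0];                       % desired target vector [t1 t2]
W1 = rand(1,3);  bh = rand;        % Steps 1-2: input-to-hidden weights and bias
W2 = rand(2,1);  b  = rand(2,1);   % hidden-to-output weights [w11'; w12'] and biases [b1; b2]
gammaO = 0.1;                      % learning rate for the output layer
func1  = @(x) 1./(1 + exp(-x));    % assumed activation functions
func2  = @(x) 1./(1 + exp(-x));
% Step 3: forward computation
h1 = func1(W1*i + bh);             % h1 = func1(w11*i1 + w21*i2 + w31*i3 + bh)
o  = func2(W2*h1 + b);             % o = [o1; o2]
% Step 4(a): adjust weights between hidden and output layer
W2 = W2 + gammaO*(t - o)*h1;       % wk1'(n+1) = wk1'(n) + gammaO*(tk - ok)*h1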