154 5 Neural Networks
Note that the amplitude a and phase angle φ of the inputs are arbitrary. The
weights of the linear discriminant are updated at each incoming signal sample (i.e.
pattern-by-pattern) using formula (5-7d); therefore, since the iteration proceeds
along time, the gradient descent method corresponds to the following weight adjustment:
w_i(t + 1) = w_i(t) − ηε(t)x_i(t).    (5-8)
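The pattern-by-pattern update can be sketched in a few lines of Python. This is an illustrative assumption, not the book's spreadsheet implementation: the function name `lms_step` is invented, and the error here is taken as primary input minus filter output, so the gradient step appears with a plus sign (with the opposite error convention the sign flips, matching the minus of formula 5-8):

```python
import numpy as np

def lms_step(w, x, d, eta):
    """One pattern-by-pattern LMS weight adjustment (cf. formula 5-8).

    w   : current weight vector of the linear discriminant
    x   : reference inputs at this sample (e.g. 50 Hz sine and cosine)
    d   : primary input sample (ECG contaminated with 50 Hz noise)
    eta : learning rate
    """
    y = np.dot(w, x)       # linear discriminant (filter) output
    e = d - y              # error output: the filtered ECG sample
    w = w + eta * e * x    # gradient-descent step on the squared error
    return w, e
```

Feeding the samples one by one, the weights drift until the filter output regresses the incoming 50 Hz noise and the error output retains the ECG.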
For implementation purposes we can transform this iteration in time into an
iteration in space, as done in the ECG 50Hz.xls file, where each row
represents a new iteration step. Using a suitably low learning rate, the linear
discriminant (filter) output will converge to the incoming noise, as shown in
Figure 5.5. As a result, we will obtain at the error output the filtered ECG shown in
Figure 5.6. Note from both figures how the discriminant adjustment progresses
until it perfectly matches (regresses) the incoming noise. By varying the learning
rate the reader will have the opportunity to appreciate two things:
- Up to a certain point, increasing η will produce faster learning.
- Beyond that point, the learning step is so large that the process does not
converge; in fact, it diverges quickly, producing a saturated output.
Figure 5.6. Filtered ECG using the LMS-adjusted discriminant method with
learning rate 0.002.
The reader can also change the amplitude and phase angle of the discriminant
inputs in order to see that their values are immaterial and that the discriminant will
make the right approximation whatever values of a and φ are used. In fact, the filter
will even track slow changes of frequency and phase of the incoming noise signal.
The influence of the 50Hz component of the ECG is negligible, since it is uncorrelated
with the noise.
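The immateriality of the reference amplitude and phase can also be verified in a short simulation. The sketch below is a stand-in experiment under stated assumptions: a 2 Hz sinusoid plays the role of the ECG (the real signal is of course not sinusoidal), and `cancel_50hz` is an invented helper, not part of the book's material:

```python
import numpy as np

def cancel_50hz(a, phi, eta=0.01, n_samples=6000, fs=500.0):
    """Adaptive 50 Hz cancellation with reference amplitude a and phase phi.

    A 2 Hz sinusoid stands in for the ECG (illustrative assumption); the
    primary input adds 50 Hz mains noise to it.  Returns the RMS of the
    residual (error output minus clean signal) over the final second.
    """
    w = np.zeros(2)
    resid = []
    for n in range(n_samples):
        t = n / fs
        ecg = np.sin(2*np.pi*2.0*t)                     # stand-in ECG
        d = ecg + 0.5*np.sin(2*np.pi*50.0*t + 1.0)      # ECG + mains noise
        x = a * np.array([np.sin(2*np.pi*50.0*t + phi),
                          np.cos(2*np.pi*50.0*t + phi)])
        e = d - np.dot(w, x)                            # filtered ECG
        w = w + eta * e * x                             # LMS step
        resid.append(e - ecg)
    tail = np.array(resid[-int(fs):])
    return float(np.sqrt(np.mean(tail**2)))
```

Running this for quite different (a, φ) pairs leaves a residual well below the raw noise RMS of 0.5/√2 ≈ 0.35, because the weights simply rescale and rotate to regress the noise; the 2 Hz component barely perturbs the adaptation, being uncorrelated with the 50 Hz references.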