5 Neural Networks
Let us now consider the two-class cork stoppers classification problem, already
studied in the previous chapter (features N and PRT10). Using the Statistica Neural
Networks module we can train a perceptron to solve this problem (using a
learning rate of 0.5 and the step activation function seen in (5-16a)).
Training the network in batch mode for 200 epochs, we can see in the training
error graph of Figure 5.18 (also known as the learning curve) that the error decreases
until it stabilizes, always in a jumpy way. The overall classification error is 11%,
with 4 misclassifications for class ω1 and 7 for class ω2. If we use the logistic
function instead of the step function, a smoother convergence and a similar
solution are obtained.
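The training procedure just described can be sketched as follows. This is a minimal illustration, not the actual experiment: the data is synthetic rather than the cork-stoppers measurements, and all function and variable names are our own.

```python
import numpy as np

def train_perceptron(X, y, eta=0.5, epochs=200):
    """Batch-mode perceptron with a step activation function.

    X: (n_samples, n_features) inputs (assumed scaled to [0, 1]);
    y: class labels in {0, 1}. Weights and bias start at zero.
    Returns the weights, the bias, and the per-epoch training-error
    history (the "learning curve").
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    errors = []
    for _ in range(epochs):
        out = (X @ w + b > 0).astype(float)   # step activation
        err = y - out
        # Batch mode: corrections are accumulated over the whole
        # epoch before the weights are updated.
        w += eta * (err @ X)
        b += eta * err.sum()
        errors.append(float(np.mean(out != y)))
    return w, b, errors

# Toy linearly separable data (NOT the cork-stoppers features).
rng = np.random.default_rng(0)
X = rng.random((40, 2))
y = (X[:, 1] > X[:, 0]).astype(float)
w, b, errors = train_perceptron(X, y)
```

Plotting `errors` against the epoch number reproduces the jumpy learning curve described above; replacing the step function with a logistic (and the error count with a continuous error) smooths the curve.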
When training perceptrons it is customary to scale the inputs, as described in
detail in section 5.5.2. The previous solution for the cork stoppers was obtained
by scaling the inputs to the [0, 1] interval, computing the scaled features with
the min-max transformation x' = (x - xmin)/(xmax - xmin).
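A minimal sketch of this [0, 1] scaling step. The minima and maxima of N and PRT10 are not given here, so the numbers below are purely illustrative:

```python
import numpy as np

def minmax_scale(X):
    """Map each feature column to the [0, 1] interval."""
    xmin = X.min(axis=0)
    xmax = X.max(axis=0)
    return (X - xmin) / (xmax - xmin)

# Illustrative values only, not the actual N and PRT10 measurements.
X = np.array([[55.0, 120.0],
              [80.0, 350.0],
              [140.0, 760.0]])
Xs = minmax_scale(X)
```

After scaling, every column of `Xs` spans exactly [0, 1], so no single feature dominates the weight updates merely because of its measurement units.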
The perceptron weights computed for these scaled features are:
wN = -4.78; wPRT10 = 7.68; w0 = 1.223 (bias).
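These weights can be turned directly into a decision rule on the scaled features. A sketch; note that which side of the discriminant corresponds to which class is our assumption, since the text gives only the weights:

```python
# Weights reported in the text for the scaled features.
W_N, W_PRT10, W0 = -4.78, 7.68, 1.223

def discriminant(n_scaled, prt10_scaled):
    """Value of the perceptron's linear discriminant."""
    return W_N * n_scaled + W_PRT10 * prt10_scaled + W0

def classify(n_scaled, prt10_scaled):
    # Assign class 1 on the positive side, class 0 otherwise
    # (the side-to-class mapping is an assumption).
    return 1 if discriminant(n_scaled, prt10_scaled) > 0 else 0
```

The decision boundary is the line where `discriminant` equals zero, which is the solid line shown in Figure 5.19.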
Figure 5.19. Linear discriminants for two classes of cork stoppers. Dotted line:
Statistical classifier. Solid line: Perceptron.
Using these weights it is now a simple matter to compute the linear discriminant
for the perceptron as: