at (0, -2) and (3, 1). Verify the convergence towards the local and global minimum,
respectively.
5.4 Using equations (5-12), explain why an LMS-adjusted discriminant with a sigmoid
activation function converges to the same solution as the Bayesian classifier. Restrict
the analysis to a two-class situation.
5.5 Classify the two-class cork stoppers data with a single perceptron, illustrated in Figure
5.19, using thresholds at the output in order to obtain an appropriate reject region.
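A minimal sketch of the output-thresholding idea, assuming a perceptron already trained
on two cork-stoppers features (the weight values below are placeholders, not results
from the book):

```python
import numpy as np

# Placeholder weights of a trained single perceptron (bias last);
# the real values would come from training on the cork-stoppers data.
w = np.array([0.8, -0.5, 0.1])

def classify_with_reject(x, t=0.5):
    """Threshold the linear output symmetrically: values inside
    [-t, t] are rejected rather than assigned to a class."""
    s = np.dot(w[:-1], x) + w[-1]
    if s > t:
        return 1   # class omega_1
    if s < -t:
        return 2   # class omega_2
    return 0       # reject

print(classify_with_reject(np.array([0.2, 1.5])))
```

Widening t enlarges the reject region, trading classified patterns for fewer errors.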
5.6 Repeat the single perceptron experiment for the two-class classification of cork
stoppers, using activation functions other than the hard-limiter. Compare the results and
learning curves.
5.7 Design appropriate MLPs for classification of the MLP datasets and observe the
influence of the learning and momentum parameters on the training (see the update
sketch after this exercise):
a) For the MLP1 and MLP2 data, derive the decision boundaries from the weight
values and confirm the constructive argument from section 5.5.
b) What is the structure of a multi-layer perceptron needed for the MLP3 data, if the
constructive argument applies?
c) Verify, using several training experiments with the structure previously
determined, that the constructive argument is not confirmed in the case of the
MLP3 data.
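As a reminder of what the two training parameters do, here is a sketch of the
gradient-descent weight update with learning rate η and momentum α; a generic
quadratic error surface stands in for the back-propagated MLP error gradient (the
matrix H and all values below are illustrative assumptions):

```python
import numpy as np

def grad_E(w):
    # Stand-in gradient of a quadratic error E(w) = 1/2 w^T H w;
    # in a real MLP this is the back-propagated error gradient.
    H = np.array([[3.0, 0.0], [0.0, 1.0]])
    return H @ w

eta, alpha = 0.1, 0.9        # learning rate and momentum
w = np.array([2.0, -2.0])    # initial weights
dw = np.zeros_like(w)        # previous update, for the momentum term

for _ in range(100):
    dw = -eta * grad_E(w) + alpha * dw   # dw(t) = -eta*grad + alpha*dw(t-1)
    w = w + dw

print(w)   # approaches the minimum at the origin
```

The momentum term reuses the previous step, smoothing the trajectory and speeding
progress along shallow directions of the error surface.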
5.8 Change the class labels of the MLP3 patterns lying in the upper left shaded area of
Figure 5.20 and train an MLP 2:3:1 classifier. Explain the results obtained.
5.9 Consider that a neural net has an energy function with 2 weights given by (5-4c).
a) Compute the eigenvectors and eigenvalues of the Hessian.
b) Compute the value of the learning parameter η_max, above which the gradient
descent starts to diverge.
c) Plot the curve showing how the distance to the minimum error evolves, along the
direction of the eigenvector corresponding to the minimum eigenvalue, using η =
η_max/2 and a starting distance of 10 (see the sketch below).
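A numerical sketch for the three parts, assuming a generic quadratic energy
E(w) = ½ wᵀHw; the Hessian below is a placeholder, to be replaced by the one
obtained from (5-4c):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder Hessian of the quadratic energy; substitute the matrix
# derived from (5-4c).
H = np.array([[4.0, 1.0], [1.0, 2.0]])

# a) eigenvectors and eigenvalues of the Hessian
eigvals, eigvecs = np.linalg.eigh(H)

# b) gradient descent on a quadratic diverges when eta > 2/lambda_max
eta_max = 2.0 / eigvals.max()

# c) along the eigendirection of the minimum eigenvalue, the distance
#    to the minimum shrinks by |1 - eta*lambda_min| at every step
eta = eta_max / 2
d = 10.0 * np.abs(1 - eta * eigvals.min()) ** np.arange(50)

plt.plot(d)
plt.xlabel('iteration')
plt.ylabel('distance to minimum')
plt.show()
```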
5.10 Use an MLP approach to classify the three classes of cork stoppers using features ART,
PRM, NG and RAAR (see section 4.2.4). Determine if there are weights with negligible
values that can be discarded, and compute the upper bound of the number of training
patterns sufficient for training before and after discarding negligible weights.
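A back-of-the-envelope sketch of the pattern-count bound, assuming the common rule
of thumb n ≈ W/ε (total number of weights divided by the tolerated error rate);
the network dimensions used here are hypothetical, not the exercise's solution:

```python
def pattern_bound(n_inputs, n_hidden, n_outputs, eps=0.1):
    """Rule-of-thumb bound n ~ W/eps, where W counts all weights
    including biases and eps is the allowed error rate. A common
    heuristic, not necessarily the book's exact formula."""
    W = (n_inputs + 1) * n_hidden + (n_hidden + 1) * n_outputs
    return W / eps

# e.g. a hypothetical 4:4:3 MLP for the four features and three classes
print(pattern_bound(4, 4, 3))   # 350 patterns for eps = 0.1
```

Discarding negligible weights lowers W and therefore the bound proportionally.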
5.11 Design an MLP that predicts SONAE share values (StockExchange dataset) two days
ahead, using the same external inputs as in the solution illustrated in Figure 5.28.
Compare the results obtained with those for one-day-ahead prediction, using the
ranking index (5-29).
5.12 Repeat the previous exercise, using the Weather dataset.
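For the two prediction exercises above, one way to set up the k-days-ahead task is to
shift the target series by the horizon when building the training pairs; the series
below is synthetic, standing in for the SONAE closing values (the external inputs of
Figure 5.28 would be appended as extra columns):

```python
import numpy as np

def make_pairs(series, n_lags=5, horizon=2):
    """Build (input, target) pairs for horizon-steps-ahead prediction:
    inputs are n_lags consecutive values, the target is the value
    'horizon' steps after the last input."""
    X, y = [], []
    for t in range(len(series) - n_lags - horizon + 1):
        X.append(series[t:t + n_lags])
        y.append(series[t + n_lags + horizon - 1])
    return np.array(X), np.array(y)

prices = np.cumsum(np.random.randn(200))          # synthetic stand-in series
X, y = make_pairs(prices, n_lags=5, horizon=2)    # two-days-ahead targets
print(X.shape, y.shape)
```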
5.13 Estimate the lower bound of the number of samples necessary for training the MLP
for one-day-ahead prediction of the SONAE share values, described in section 5.5.3.