Page 193 - Classification Parameter Estimation & State Estimation An Engg Approach Using MATLAB
182 SUPERVISED LEARNING
5. Assuming that the prior probability density of the error rate of a classifier is uniform
between 0 and 1/K, give an expression for the posterior density p(E | n_error, N_Test),
where N_Test is the size of an independent validation set and n_error is the number of
misclassifications of the classifier. ( )
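As a numerical illustration of the quantity asked for in exercise 5 (a sketch only: the exercise asks for a closed-form expression, and the function name and grid-based normalization below are my own choices):

```python
import numpy as np

def error_rate_posterior(n_error, n_test, K, grid_size=10001):
    """Evaluate p(E | n_error, N_Test) on a grid (illustrative sketch).

    Prior: uniform on [0, 1/K].  Likelihood: binomial, so the posterior
    is proportional to E**n_error * (1 - E)**(N_Test - n_error),
    truncated to [0, 1/K] and renormalized.
    """
    E = np.linspace(0.0, 1.0 / K, grid_size)
    unnorm = E ** n_error * (1.0 - E) ** (n_test - n_error)
    # Trapezoidal normalization over the support [0, 1/K]
    Z = np.sum(0.5 * (unnorm[1:] + unnorm[:-1]) * np.diff(E))
    return E, unnorm / Z

# Example: 5 errors on a validation set of 100 samples, K = 3 classes
E, posterior = error_rate_posterior(n_error=5, n_test=100, K=3)
```

The grid integral stands in for the incomplete beta function that normalizes the truncated posterior in the closed-form answer.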
6. Derive the dual formulation of the support vector classifier from the primal formulation.
Do this by setting the partial derivatives of L to zero and substituting the results
into the primal function. ( )
7. Show that the support vector classifier with slack variables gives almost the same
dual formulation as the one without slack variables (5.56). ( )
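The main steps for exercises 6 and 7 can be sketched as follows (standard hard-/soft-margin derivation; the book's own notation in (5.56) may differ slightly):

```latex
% Primal Lagrangian (hard margin):
L = \tfrac{1}{2}\|\mathbf{w}\|^2
    - \sum_i \alpha_i \left[ y_i(\mathbf{w}^T\mathbf{x}_i + b) - 1 \right]
% Stationarity conditions:
\frac{\partial L}{\partial \mathbf{w}} = 0
  \;\Rightarrow\; \mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i,
\qquad
\frac{\partial L}{\partial b} = 0
  \;\Rightarrow\; \sum_i \alpha_i y_i = 0
% Substituting back into L gives the dual:
\max_{\boldsymbol{\alpha}}\; \sum_i \alpha_i
  - \tfrac{1}{2}\sum_i\sum_j \alpha_i \alpha_j y_i y_j
    \mathbf{x}_i^T \mathbf{x}_j
\quad \text{s.t.}\;\; \alpha_i \ge 0,\;\; \sum_i \alpha_i y_i = 0
% With slack variables \xi_i and penalty C \sum_i \xi_i, the same steps
% yield the identical dual objective; the only change is the box
% constraint 0 \le \alpha_i \le C.
```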
8. Derive the neural network weight update rules (5.65) and (5.66). ( )
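For exercise 8, a minimal numerical sketch of the kind of update rules involved, assuming a two-layer network with sigmoid hidden units, a linear output, and squared error (the actual rules (5.65) and (5.66) in the text may use different activations or an error measure not reproduced here):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backprop_updates(x, t, V, w, eta=0.01):
    """One gradient-descent step for the network y = w^T h, h = sigmoid(V x),
    with squared error 0.5 * (y - t)^2.  Illustrative sketch only.
    """
    h = sigmoid(V @ x)               # hidden-unit activations
    y = w @ h                        # linear output
    delta = y - t                    # output error
    grad_w = delta * h               # output-layer gradient
    # hidden-layer gradient: chain rule through sigmoid derivative h(1-h)
    grad_V = np.outer(delta * w * h * (1.0 - h), x)
    return w - eta * grad_w, V - eta * grad_V
```

A quick sanity check on such rules is that one small step reduces the squared error for a random input/target pair.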
9. Neural network weights are often initialized to random values in a small range, e.g.
(-0.01, 0.01). As training progresses, the weight values quickly increase. However,
the support vector classifier tells us that solutions with small norms of the
weight vector have high generalization capability. What would be a simple way to
ensure that the network does not become too nonlinear? ( )
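One simple mechanism relevant to exercise 9 is weight decay; a minimal sketch (the function name, learning rate, and decay constant are my own choices, not the book's):

```python
def weight_decay_step(w, grad, eta=0.1, lam=1e-3):
    """One gradient-descent step with weight decay.

    The extra term lam * w pulls each weight toward zero, keeping the
    weight norm small so the sigmoid units stay near their (almost
    linear) operating region -- one simple way to limit how nonlinear
    the network becomes.
    """
    return [wi - eta * (gi + lam * wi) for wi, gi in zip(w, grad)]
```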
10. Given the answer to exercise 9, what will be the effect of using better optimization
techniques (such as second-order algorithms) in neural network training? Validate
this experimentally using the PRTools function lmnc. ( )