Page 204 - Decision Making Applications in Modern Power Systems
Adaptive estimation and tracking of power quality disturbances Chapter | 6 167
$\mathrm{net}_o = \dfrac{1}{N_o} \sum_{n=1}^{n_y} W^{hy}_{no} H_n \quad \text{and} \quad \mathrm{net}'_o = \max_t\left(\mathrm{net}_t\right)$   (6.48)
where
m = No. of input layers
n = No. of hidden layers
o = No. of output layers
t = No. of training examples
C = No. of classifications
Λ = smoothing parameter
A = input vector
|A − A_t^o| = Euclidean distance between the vectors A and A_t^o
W_mn^{ah} = connection weight between the A and H layers
W_no^{hy} = connection weight between the H and O layers
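To make Eq. (6.48) concrete, the following minimal Python sketch computes each output's net value as the averaged weighted sum of the hidden-layer activations and then selects the output with the maximum net as the winning class. All numerical values (H, W_hy, N) are hypothetical, chosen only for illustration.

```python
def output_layer(H, W_hy, N):
    """Eq. (6.48): net_o = (1/N_o) * sum_n W_no^{hy} * H_n for each output o;
    the winning class is the output with the maximum net value."""
    n_outputs = len(W_hy[0])
    nets = []
    for o in range(n_outputs):
        net_o = sum(W_hy[n][o] * H[n] for n in range(len(H))) / N[o]
        nets.append(net_o)
    winner = max(range(n_outputs), key=lambda o: nets[o])
    return nets, winner

# Hypothetical hidden-layer activations and weights, for illustration only.
H = [0.9, 0.2, 0.4]                          # hidden-layer outputs H_n
W_hy = [[0.8, 0.1], [0.3, 0.7], [0.2, 0.9]]  # W_no^{hy}: hidden n -> output o
N = [2, 2]                                   # N_o: training examples per class

nets, winner = output_layer(H, W_hy, N)
```

Here the first output accumulates the larger weighted sum, so `winner` is class 0; with real data, H would come from the hidden (pattern) layer driven by the smoothing parameter Λ.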
6.3.5 Support vector machine
Based on statistical learning theory, a powerful adaptive computational tool called the support vector machine (SVM) was introduced by Vapnik for both regression and classification [27,28]. It performs a nonlinear mapping of the input vectors into a high-dimensional feature space, where an optimal hyperplane is constructed to determine the generalization ability of the classifier. For a given set of training data belonging to different categories of the target variable, the training algorithm of the SVM fault classifier [29] builds a model in the mapped feature space such that the features of the separate categories are divided by a clear gap; the hyperplane is then defined within this gap so that the categories are separated. To maximize the gap between the categories, a radial basis function (RBF) is used as the kernel in this chapter, which makes the hyperplane optimal. The features of the testing data set are then mapped into the same hyperplane and validated by the trained SVM model [30]. The main advantages of SVM are
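The RBF-kernel classification described above can be sketched in plain Python: the decision function sums kernel similarities between a test point and the support vectors, and its sign gives the class. The support vectors, coefficients, and bias below are hypothetical placeholders (a trained SVM such as LIBSVM would supply them); `gamma` is the RBF width parameter.

```python
from math import exp

def rbf_kernel(x, z, gamma):
    """RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return exp(-gamma * sq_dist)

def svm_decision(x, support_vectors, coeffs, bias, gamma):
    """Kernelized decision function f(x) = sum_i coeffs_i * K(sv_i, x) + bias.
    Each coeffs_i folds in alpha_i * y_i; sign(f) gives the predicted class."""
    return sum(c * rbf_kernel(sv, x, gamma)
               for sv, c in zip(support_vectors, coeffs)) + bias

# Hypothetical support vectors and multipliers, for illustration only.
svs = [[0.0, 0.0], [1.0, 1.0]]
coeffs = [1.0, -1.0]   # alpha_i * y_i for each support vector
bias = 0.0
gamma = 0.5

f_pos = svm_decision([0.1, 0.1], svs, coeffs, bias, gamma)  # near the +1 SV, f > 0
f_neg = svm_decision([0.9, 0.9], svs, coeffs, bias, gamma)  # near the -1 SV, f < 0
```

A larger `gamma` narrows each kernel's radius, so the decision boundary hugs the support vectors more tightly; this is the same trade-off controlled by the gamma parameter (g) discussed below.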
that it is less prone to overfitting, does not get trapped in local minima, yields a sparse solution, and provides a global solution. It is very important to select proper SVM parameters so that high accuracy in the classification of PQ events and good generalization performance can be achieved. For classification purposes, a support
vector classifier (SVC) is used in this chapter. The SVM parameters are taken from the library of SVM (LIBSVM) [30], and their optimal values are obtained with the particle swarm optimization (PSO) technique. To make the hyperplane optimal, the RBF is used as the kernel, which further maximizes the gap between the two categories. Two additional parameters, namely the cost or soft-margin parameter (c) and the gamma parameter (g), are taken from LIBSVM. The cost parameter (c) gives the trade-off between a forced, rigid margin and training errors, and the gamma parameter (g) controls the shape and the radius of the hyperplane, and