the dot product with input $\mathbf{x}$, usually augmented by a constant
one. This linear regression scheme corresponds to a linear, single-layer
network; compare Fig. 3.1.
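As a minimal sketch of this linear scheme, the weights can be fitted by
ordinary least squares; the NumPy-based code below and all names in it
are illustrative assumptions, not part of the original presentation.

```python
# Minimal sketch: linear regression as a single-layer network.
# The input x is augmented by a constant one so that the bias is
# absorbed into the weight vector w (all names illustrative).
import numpy as np

def augment(X):
    """Append a constant-one column to each input row."""
    return np.hstack([X, np.ones((X.shape[0], 1))])

def fit_linear(X, y):
    """Least-squares estimate of w for F(w, x) = w . x."""
    return np.linalg.lstsq(augment(X), y, rcond=None)[0]

# Example: recover a noisy linear map y = 2*x0 - x1 + 0.5.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - X[:, 1] + 0.5 + 0.01 * rng.normal(size=100)
w = fit_linear(X, y)        # last entry of w estimates the bias
```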
The classical approximation scheme is a linear combination of a suitable
set of basis functions $\{B_i\}$ on the original input $\mathbf{x}$
$$
F(\mathbf{w}, \mathbf{x}) = \sum_{i=1}^{m} w_i \, B_i(\mathbf{x}) \tag{3.3}
$$
and corresponds to a network with one hidden layer. This representation
includes global polynomials (the $B_i$ are products and powers of the
input components; compare polynomial classifiers), as well as expansions
in series of orthogonal polynomials and splines.
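To make the structure of Eq. (3.3) concrete, the sketch below fits such
a linear combination using global polynomials as the basis; the choice
of basis, the target function, and all names are illustrative
assumptions.

```python
# Sketch of Eq. (3.3): F(w, x) = sum_i w_i B_i(x), here with the
# global polynomial basis B_i(x) = x^i on a scalar input (an
# illustrative choice; orthogonal polynomials or splines work too).
import numpy as np

def design_matrix(x, basis):
    """Evaluate every basis function B_i at every sample of x."""
    return np.column_stack([B(x) for B in basis])

basis = [lambda x, k=k: x**k for k in range(4)]   # B_i(x) = x^i

# Because F is linear in the weights w_i, fitting reduces to
# linear least squares, which is what makes the scheme classical.
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(np.pi * x)                      # toy target function
w, *_ = np.linalg.lstsq(design_matrix(x, basis), y, rcond=None)
F = design_matrix(x, basis) @ w            # approximation of y
```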
Nested sigmoid schemes correspond to the Multi-Layer Perceptron and
can be written as
$$
F(\mathbf{w}, \mathbf{x}) = g\left( \sum_k u_k \, g\Big( \sum_j v_{kj} \, g\big( \sum_i w_{ji} x_i \big) \Big) \right) \tag{3.4}
$$
where $g$ is a sigmoid transfer function, and $\mathbf{w} = \{w_{ji},
v_{kj}, u_k\}$ denotes the synaptic input weights of the neural units. This scheme
of nested non-linear functions is unusual in the classical theory of
approximation (Poggio and Girosi 1990).
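A direct transcription of Eq. (3.4) is the forward pass below; the
logistic form of $g$, the layer sizes, and the random weights are
illustrative assumptions.

```python
# Forward pass for the nested-sigmoid scheme of Eq. (3.4):
# F(w, x) = g( sum_k u_k g( sum_j v_kj g( sum_i w_ji x_i ) ) ).
import numpy as np

def g(a):
    """Sigmoid transfer function (logistic form assumed here)."""
    return 1.0 / (1.0 + np.exp(-a))

def mlp_forward(x, W, V, u):
    """Two hidden layers of sigmoid units and a sigmoid output."""
    h1 = g(W @ x)     # inner sum over inputs i
    h2 = g(V @ h1)    # middle sum over hidden units j
    return g(u @ h2)  # outer sum over hidden units k

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input vector
W = rng.normal(size=(5, 3))     # weights w_ji
V = rng.normal(size=(4, 5))     # weights v_kj
u = rng.normal(size=4)          # weights u_k
y = mlp_forward(x, W, V, u)
```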
Projection Pursuit Regression uses approximation functions that are
a sum of univariate functions $B_i$ of linear combinations of the input
variables:
$$
F(\mathbf{w}, \mathbf{x}) = \sum_{i=1}^{m} B_i(\mathbf{w}_i \cdot \mathbf{x}) \tag{3.5}
$$
An interesting advantage is that this scheme copes in a straightforward
way with affine transformations of the given task (scaling and rotation)
(Friedman 1991).
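The structure of Eq. (3.5) can be sketched as follows; in actual
projection pursuit regression both the directions $\mathbf{w}_i$ and the
univariate ridge functions $B_i$ are fitted to data (Friedman 1991), so
fixing them by hand, as done here, is purely illustrative.

```python
# Evaluation sketch of Eq. (3.5): F(w, x) = sum_i B_i(w_i . x),
# a sum of univariate ridge functions of projections of x.
# Directions and ridge functions are fixed here for illustration;
# real projection pursuit regression fits both from data.
import numpy as np

def ppr_eval(x, directions, ridge_fns):
    """Sum of univariate functions of linear projections of x."""
    return sum(B(w @ x) for w, B in zip(directions, ridge_fns))

directions = [np.array([1.0, 0.0]), np.array([0.7, 0.7])]  # w_i
ridge_fns  = [np.tanh, lambda t: t**2]                     # B_i
y = ppr_eval(np.array([0.5, -1.0]), directions, ridge_fns)
```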
Regression Trees: The domain of interest is recursively partitioned into
hyper-rectangular subregions. The resulting subregions are stored,
e.g., as a binary tree in the CART method (“Classification and
Regression Trees”, Breiman, Friedman, Olshen, and Stone 1984). Within
each subregion, $f$ is approximated, often by a constant, or by piecewise