Page 77 - Anatomy of a Robot
62 CHAPTER TWO
In the equation, S(t) is a vector of step sizes that might change with time but can be
fixed. This vector could contain, in our example, two fixed step size values, each
roughly proportional to 5 percent of the average size of X1 and X2. An alternate method
could have the vector contain two varying step size values, each roughly proportional
to 5 percent of the present values of X1 and X2. The point is that X1 and X2 will change gradually in a particular direction in order to satisfy control system requirements. If the cost
function C(X(t)) shows that X1 must increase, then the time iteration of the equation
will bump X1 up by the step size. If the cost function shows that X2 must decrease, then
the time iteration of the equation will bump X2 down by the step size.
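As a concrete sketch of this fixed-step iteration (my own illustration; the names `X`, `Xd`, and `S` are assumptions, not the book's), each pass bumps every element of X up or down by its step size according to the sign of the cost gradient:

```python
import numpy as np

def iterate(X, Xd, S, steps=200):
    """Nudge X toward the desired vector Xd, one fixed-size step at a time."""
    for _ in range(steps):
        grad = X - Xd              # gradient of the quadratic cost defined later
        X = X - S * np.sign(grad)  # bump each element up or down by its step size
    return X

# Two states, with fixed step sizes of roughly 5 percent of each state's scale
X = iterate(np.array([10.0, 20.0]), np.array([15.0, 5.0]), np.array([0.5, 1.0]))
```

In this sketch an element stops moving once its error reaches zero; with a truly fixed step size and a target that is not an exact multiple of the step, the element would instead oscillate within one step size of its desired value.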
C(X(t)), a vector of cost functions based on X(t), is yet to be defined. The cost func-
tion is a measure of the “pain” the control system is experiencing because the values
(past and present) of X(t) do not match the desired values of Xd(t). We use the derivative (d C(X(t))/d X(t)) because we want the corrective step size:

- To be larger if the cost (pain) is mounting rapidly as X(t) changes the wrong way. Thus, we must take more drastic corrective action.
- To be smaller if the cost (pain) is not mounting rapidly as X(t) changes the wrong way. We are near the desired operation area and are not in pain, so why move much?
Such an iteration equation can be used as a solution for robotic control. But what’s
missing is the cost function. The proper choice of a cost function really determines the
behavior of the robot. Much of modern work on control systems revolves around the
choice of the cost function and how it is used during iteration.
One very popular framework for the control system's cost function is the least squares framework, discovered by Legendre and Gauss in the early nineteenth century (see Figure 2-30). Termed the least mean squares (LMS) algorithm, it sets the cost function C(X(t)) proportional to the sum of the squares of the errors in each element of the vector:
C(X(t)) = k * Σn (Xn(t) − Xdn(t))²
where k is an arbitrary scaling constant.
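As a quick numerical check of this cost (a sketch; the name `lms_cost` and the choice k = 0.5 are mine, anticipating the specific example below):

```python
import numpy as np

def lms_cost(X, Xd, k=0.5):
    """Least-squares cost: k times the sum of squared errors in each element."""
    return k * np.sum((X - Xd) ** 2)

cost = lms_cost(np.array([1.0, 2.0]), np.array([0.0, 0.0]))  # 0.5 * (1 + 4) = 2.5
```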
In our specific example, we could set the cost function to the sum of the squares of
the errors:
C(X(t)) = 0.5 * ((X1(t) − X1d(t))² + (X2(t) − X2d(t))²)
Differentiating by X1 and X2, we get the two elements of (d C(X(t))/d X(t)):
d C(X1(t)) / d X1(t) = X1(t) − X1d(t)
d C(X2(t)) / d X2(t) = X2(t) − X2d(t)
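Putting the pieces together, here is a minimal sketch (mine, not the book's code) of the full iteration using the gradient just derived, with the step-size vector S scaling X(t) − Xd(t) directly:

```python
import numpy as np

def control_iteration(X, Xd, S, steps=100):
    """Gradient descent on C = 0.5 * sum((X - Xd)**2); its gradient is X - Xd."""
    for _ in range(steps):
        X = X - S * (X - Xd)   # X(t+1) = X(t) - S * dC/dX
    return X

# Step sizes of 5 percent pull X1 and X2 exponentially toward the desired values
X = control_iteration(np.array([10.0, 20.0]), np.array([15.0, 5.0]),
                      np.array([0.05, 0.05]))
```

Each element's error shrinks by a factor of (1 − S) per step, so larger errors produce larger corrections, which is exactly the "more pain, bigger step" behavior described above.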