where N_i is the number of black pixels in the ith subarea shown
in Figure 4.7d. For instance, the feature vector for the letter "L" in
Figure 4.7c is (0.2343, 0, 0, 0.246, 0, 0, 0.246, 0.168, 0.144)^T.
In the NPD for "L," no black pixels lie in subareas 1, 2, 4, and 5,
and the counts in subareas 7 and 8 are relatively smaller than those in
subareas 0, 3, and 6. Because the features are relative rather than
absolute values, they are unaffected by differences in the exposure
level of the image, which change the width of the character strokes.
The features are also robust to sloping digits, which may be caused
by a tilted camera; the image in Figure 4.5b is an example of sloping
digits. We found that the distribution of black pixels in each subarea
of the NPD does not change with the slope, and the feature extracted
from that image confirms this.
• Feature matching — It calculates the scalar products of the feature
vector extracted in the previous step and those from the feature
library, then returns the result according to the minimum scalar
product. This step also includes a simple judgment of the results
when more than one probable region is found, based on the following
conditions: (i) five digits are found in a region; (ii) the first character
of the region is recognized as "L"; (iii) the minimum scalar products
are very small; and (iv) the region is more likely to be the right one.
Another function of this step is to concatenate the recognized
characters into a string, according to the region judgment and the
positions of the digits, and then output it (a sketch of the extraction
and matching steps follows this list).
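As a rough illustration (not the authors' implementation), the Python sketch below computes an NPD feature vector over a 3 × 3 grid of subareas and matches it against a small template library. The grid layout, the numpy-based code, and the interpretation of the minimum-scalar-product criterion as the scalar product of the difference vector with itself (a squared Euclidean distance) are assumptions made here for illustration only.

import numpy as np

def npd_feature(glyph, rows=3, cols=3):
    # glyph: 2-D array for a single character; nonzero entries are black pixels.
    # Returns the fraction of all black pixels falling in each of the
    # rows x cols subareas (subarea 0 is the top-left one), as in the NPD.
    glyph = np.asarray(glyph, dtype=bool)
    h, w = glyph.shape
    total = glyph.sum()
    feat = np.zeros(rows * cols)
    if total == 0:
        return feat
    for r in range(rows):
        for c in range(cols):
            sub = glyph[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            feat[r * cols + c] = sub.sum() / total
    return feat

def match_character(feature, library):
    # library: dict mapping a character label to its 9-D template vector.
    # The "minimum scalar product" criterion is interpreted here as the
    # scalar product of the difference vector with itself, so the label
    # with the smallest value is returned together with that value.
    best_label, best_score = None, float("inf")
    for label, template in library.items():
        diff = feature - np.asarray(template, dtype=float)
        score = float(diff @ diff)
        if score < best_score:
            best_label, best_score = label, score
    return best_label, best_score

In this sketch the library would hold one nine-dimensional template per character ("L" and the digits), obtained from reference images of the landmark labels.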
4.4.2 Position Estimation
Assume that the robot position is expressed by the vector p = (x, y, θ)^T, and
three coordinate systems are adopted for our implementation:
• {W}: the global coordinates. The localization is to find out ^W P, the
position vector in {W} (a small illustrative sketch of the frame
transformation follows this list).
• {L}: the landmark coordinates, fixed on the landmark currently
being seen. Its origin is at the position shown in Figure 4.5a.
• {I}: the image coordinates, fixed on the image plane.
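Anticipating the transformation introduced next, the following minimal planar sketch (an illustrative assumption, not the chapter's derivation) represents a pose as a homogeneous matrix and maps a position expressed in the landmark frame {L} into the global frame {W}, given the landmark pose stored in the database.

import numpy as np

def pose_to_matrix(x, y, theta):
    # Homogeneous transform associated with a planar pose (x, y, theta).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

def landmark_to_world(landmark_pose_W, p_L):
    # landmark_pose_W: (x, y, theta) of the landmark frame {L} in {W},
    #                  as read from the landmark database.
    # p_L:             (x, y) position expressed in {L}, e.g. the robot.
    # Returns the corresponding (x, y) position in {W}.
    C_WL = pose_to_matrix(*landmark_pose_W)          # transform {L} -> {W}
    p_W = C_WL @ np.array([p_L[0], p_L[1], 1.0])     # homogeneous point
    return p_W[:2]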
If a landmark is "seen" by the robot, it is able to identify the landmark and
get the position information of the landmark in {W} from the database, and
therefore the transformation matrix ^W_L C is known. If the position in {L}, ^L P, is