the many pixels to be touched. Working with large receptive fields has proven to
be reasonably reliable and efficient here. However, once computing power allows
color and texture processing with new area-based features, a new quality of recognition
and higher robustness will result. Therefore, a compromise has been found
that allows using some of the advantages of area-based features efficiently in
connection with the 4-D approach.
In road vehicle guidance, where the viewing direction is essentially parallel to
the ground, this method offers some advantages. Due to the scaling effect of range
(distance x) in perspective mapping, features further away are reduced in size;
this may cause trouble in the interpretation process for a stereotypical application
of pyramid methods over larger image regions. In the upper image rows, each pixel
covers a much larger distance in range than in the lower ones.
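The underlying mapping geometry is the standard pinhole relation; spelling it out (the derivative step is added here for illustration and uses the same symbols f, H, L, and z as Figure 5.4 below) makes the growth of the pixel footprint with range explicit:

```latex
% Pinhole mapping with horizontal optical axis: camera elevation H,
% focal length f (in pixels), look-ahead distance L on the ground.
% A ground point at distance L appears z pixels below the horizon:
\[
  z = \frac{f\,H}{L} = \frac{f}{L/H},
  \qquad
  \left|\frac{\mathrm{d}L}{\mathrm{d}z}\right| = \frac{f\,H}{z^{2}} = \frac{L^{2}}{f\,H}.
\]
% The ground range covered by one pixel row thus grows quadratically with L,
% which is why the upper image rows span much more range than the lower ones.
```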
Figure 5.4 with the inserted table shows the effect of distance for a vertical stripe
of an image, scaled by the camera elevation H above the ground; the same stripe
width on the ground in the real world shows up in a decreasing number of image
rows with increasing distance.
[Figure 5.4: geometry sketch of a camera at elevation H with horizontal optical axis, viewing slices of the ground at look-ahead distances L/H (marks from 0.5 to 10.5); only the inserted table is reproduced here.]

L/H       |    4     5     7    10    20    30
Zo / pel  |  167   136   100    71  36.6  24.6
Zu / pel  |  214   167   115    79  38.5  25.4
ΔZ / pel  |   47    31    15     8   1.9   0.8

Figure 5.4. Mapping of a horizontal slice at distance L/H (from Zu at L/H − 0.5 to Zo at L/H + 0.5) into the image plane (focal length f = 750 pixels)
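The numbers in the table can be verified with a few lines of code; the following sketch (an added illustration, not from the book; the variable names are arbitrary) simply evaluates z = f/(L/H ∓ 0.5) for the tabulated look-ahead distances:

```python
# Reproduce the table of Figure 5.4: image row (in pixels below the horizon)
# of the near and far edge of a slice one camera elevation H wide on the ground.
f_pel = 750.0  # focal length in pixels, as given in the figure caption

for LH in (4, 5, 7, 10, 20, 30):
    Zo = f_pel / (LH + 0.5)   # upper (farther) edge of the slice, in pixels
    Zu = f_pel / (LH - 0.5)   # lower (nearer) edge of the slice, in pixels
    print(f"L/H = {LH:2d}:  Zo = {Zo:6.1f}  Zu = {Zu:6.1f}  dZ = {Zu - Zo:5.1f}")
```

Small deviations from the tabulated ΔZ values stem from rounding Zo and Zu to integer pixels before taking the difference.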
By confining regional representations to image slices or stripes at almost constant
distance, these problems can be reduced through proper selection of stripe width (see
Figure 5.3, upper part). Due to unknown road curvature, the road may appear any-
where in the image, and it may have a forking point somewhere. Therefore, the
horizontal stripes SB1 to SB4 in Figure 5.3 are selected as a bunch of regions ex-
tending over the entire image width. The resulting image intensity distributions are
shown in the lower part. In SB1, the road fork does not yet show up. SB2 has a
small dark section between two brighter ones (with almost the same total width in
the image between the outer edges, even though further away), indicating that the
road may have branched. This is confirmed in SB3 with a widened dark area in be-
tween. The value of stripe SB4 is doubtful in this case, since the branched-off road
fills only a few pixels; with the hypothesis of a road fork from SB2 and SB3, it would be
more meaningful to search for the off-going branch in a separate stripe with prop-
erly adapted parameters in the next image, if possible with higher image resolution.
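To make the stripe evaluation concrete, the following sketch (an illustrative assumption, not the algorithm used in the book; the function name, the fixed threshold, and the synthetic stripe are invented for the example) averages a horizontal stripe into a one-dimensional intensity profile and lists the dark runs. One dark run corresponds to a single road surface; two dark runs separated by a brighter region, as in SB2 and SB3, support the fork hypothesis:

```python
import numpy as np

def dark_segments(image: np.ndarray, row_top: int, row_bottom: int,
                  dark_threshold: float) -> list[tuple[int, int]]:
    """Return (start_col, end_col) of contiguous dark runs in a horizontal stripe."""
    # Average the stripe vertically to one intensity value per column.
    profile = image[row_top:row_bottom, :].mean(axis=0)
    dark = profile < dark_threshold
    segments, start = [], None
    for col, is_dark in enumerate(dark):
        if is_dark and start is None:
            start = col                      # dark run begins
        elif not is_dark and start is not None:
            segments.append((start, col - 1))  # dark run ends
            start = None
    if start is not None:
        segments.append((start, len(dark) - 1))
    return segments

# Usage on a synthetic stripe: bright shoulders, two dark road branches.
img = np.full((8, 200), 180.0)
img[:, 40:70] = 60.0      # branching road surface
img[:, 90:140] = 60.0     # main road surface
print(dark_segments(img, 0, 8, dark_threshold=120.0))
# -> [(40, 69), (90, 139)]  two dark runs, consistent with a fork hypothesis
```

In a real system the threshold would have to be adapted to the local shoulder brightness rather than being fixed.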
In other cases, the stripes need not cover the entire image width right from the
beginning but may be confined to some meaningful fraction depending on object

