Page 45 - Autonomous Mobile Robots
form a curved surface instead of a planar surface used by other approaches.
The final obstacle detection result and map are displayed in Figure 1.8c and d,
respectively.
1.3.6 Sensor Fusion
The most important task of a VGS is to provide accurate terrain descriptions
for the path planner. The quality of terrain maps is assessed by the miss rate
and the false alarm rate. Here, the miss rate refers to how often the VGS misses
a true obstacle, while a false alarm occurs when the VGS classifies a traversable
region as an obstacle region. Imagine a stereo vision system with a frame rate
of 10 Hz: it will generate 3000 obstacle maps in 5 min. Even with a successful
classification rate of 99.9%, the system may produce three erroneous obstacle
maps, any of which may cause an error in path planning. The objective of
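The arithmetic above can be sketched briefly; the function and variable names below are illustrative assumptions, not part of the chapter's VGS.

```python
# Hypothetical per-map quality metrics for a vision-based ground system (VGS).
def map_quality(missed_obstacles, true_obstacles,
                false_alarms, traversable_regions):
    """Miss rate: fraction of true obstacles the VGS failed to detect.
    False alarm rate: fraction of traversable regions labeled as obstacles."""
    miss_rate = missed_obstacles / true_obstacles
    false_alarm_rate = false_alarms / traversable_regions
    return miss_rate, false_alarm_rate

# The chapter's arithmetic: 10 Hz for 5 min yields 3000 obstacle maps;
# even a 99.9% per-map success rate leaves about 3 erroneous maps.
maps = 10 * 5 * 60                    # 3000 maps
expected_errors = maps * (1 - 0.999)  # about 3 erroneous maps
```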
sensor fusion is to combine the results from multiple sensors, either at the raw
data level or at the obstacle map level, to produce a more reliable description
of the environment than any sensor individually. Some examples of sensor
fusion are:
N-modular redundancy fusion: Fusion of three identical radar units can
tolerate the failure of one unit.
Fusion of complementary sensors: Color terrain segmentation results can
be used to verify 3D terrain analysis results.
Fusion of competitive sensors: Although both laser and stereo vision per-
form obstacle detection, their obstacle maps can be fused to reduce false
alarms.
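The N-modular redundancy example can be sketched as a majority vote over per-cell obstacle maps; the boolean-list representation is an assumption made for illustration.

```python
# Sketch of triple-modular redundancy fusion on per-cell obstacle maps.
# Each map is a list of booleans (True = obstacle). A cell is declared an
# obstacle when at least 2 of the 3 sensors agree, so the fused map
# tolerates the failure of any one unit.
def majority_vote(map_a, map_b, map_c):
    return [sum(cells) >= 2 for cells in zip(map_a, map_b, map_c)]
```

A faulty unit that reports random values is simply outvoted by the two healthy units in every cell.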
Synchronization of sensors: Different sensors have different resolutions
and frame rates. In addition to calibrating all sensors using the same vehicle
coordinates, sensors need to be synchronized both temporally and spatially
before their results can be merged. Several solutions can be applied for sensor
synchronization.
External-trigger-based synchronization: For sensors with external
trigger capability such as IR, color, and stereo cameras, their data capturing can
be synchronized by a hardware trigger signal from the control system of the
UGV. For laser or ladar, which do not have such capability, the data captured at
the time nearest to the trigger signal are used as outputs. In this case, no matter
how fast a laser scanner can scan (usually 20 frames/sec), the fusion frame rate
depends on the slowest sensor (usually stereo vision, around 10 frames/sec).
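The nearest-to-trigger selection for a free-running scanner can be sketched as follows; the tuple layout and timing values are assumptions for illustration, not taken from the UGV described in the text.

```python
# Sketch: a laser scanner without external-trigger capability runs freely;
# when the hardware trigger fires, the scan whose time stamp is nearest
# to the trigger time is used as that fusion cycle's output.
def nearest_frame(frames, trigger_time):
    """frames: list of (timestamp, data) tuples from the free-running sensor."""
    return min(frames, key=lambda f: abs(f[0] - trigger_time))

# Laser at ~20 frames/sec (one scan every 0.05 s), triggered at the
# stereo rate of ~10 Hz: the fusion rate is set by the slower sensor.
scans = [(i * 0.05, f"scan{i}") for i in range(5)]
chosen = nearest_frame(scans, 0.12)  # picks the scan stamped at 0.10 s
```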
A centralized time stamp for each image from each sensor: In this case
sensors capture data as fast as they can. Since each sensor normally has its
own CPU for data processing, a centralized control system will send out a
standardized time stamp signal to all CPUs regularly (say, every 1 h) to minimize
the time stamp drift.
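The periodic resynchronization scheme can be sketched as a per-CPU clock-offset correction; the class and its interface are assumptions made for illustration.

```python
# Sketch: each sensor CPU keeps an offset between its local clock and the
# centralized master clock. The master broadcasts a reference time stamp
# periodically (say, every hour); each CPU records the offset at that
# instant and applies it when stamping captured data, bounding drift.
class SensorClock:
    def __init__(self):
        self.offset = 0.0

    def resync(self, master_time, local_time):
        # Offset between master and local clocks at the sync instant.
        self.offset = master_time - local_time

    def stamp(self, local_time):
        # Local capture time corrected into the master time base.
        return local_time + self.offset
```

Between broadcasts each CPU stamps data as fast as it captures, so sensors need not share a frame rate; only their time bases are aligned.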
© 2006 by Taylor & Francis Group, LLC