Page 147 - Introduction to Autonomous Mobile Robots
Chapter 4
In order to carry out the calibration step of step 2 above, we must find values for twelve unknowns, requiring twelve equations. Since each conjugate point pair supplies three equations (one per coordinate of the rigid-body transform), calibration requires, for a given scene, four conjugate points.
The above example supposes that regular translation and rotation are all that are required
to effect sufficient calibration for stereo depth recovery using two cameras. In fact, single-
camera calibration is itself an active area of research, particularly when the goal includes
any 3D recovery aspect. When researchers intend to use even a single camera with high pre-
cision in 3D, internal errors relating to the exact placement of the imaging chip relative to
the lens optical axis, as well as aberrations in the lens system itself, must be calibrated
against. Such single-camera calibration involves solving for the exact offset of the imaging chip relative to the optical axis, in both translation and angle, and for the relationship between distance along the imaging chip surface and externally viewed surfaces. Furthermore, even without optical aberration in play, the lens is an inherently radial instrument, and so the image projected upon a flat imaging surface is radially distorted (i.e., parallel lines in the viewed world converge on the imaging chip).
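The radial distortion described above is commonly modeled as a low-order polynomial in the squared distance from the optical axis. The following is a minimal sketch of that idea (the coefficients k1 and k2 and the fixed-point inversion scheme are illustrative assumptions, not taken from the text):

```python
def distort(xu, yu, k1, k2):
    """Apply a polynomial radial distortion model: the undistorted
    normalized image point (xu, yu) is scaled by 1 + k1*r^2 + k2*r^4,
    where r is its distance from the optical axis."""
    r2 = xu * xu + yu * yu
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xu * scale, yu * scale

def undistort(xd, yd, k1, k2, iters=50):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted point by the scale evaluated at the current estimate.
    Converges for the mild distortion typical of real lenses."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu
```

Round-tripping a point through `distort` and `undistort` should recover the original coordinates to high precision.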
A commonly practiced technique for such single-camera calibration is based upon
acquiring multiple views of an easily analyzed planar pattern, such as a grid of black
squares on a white background. The corners of such squares can easily be extracted, and
using an iterative refinement algorithm the intrinsic calibration parameters of a camera
can be extracted. Because modern imaging systems are capable of spatial accuracy greatly
exceeding the pixel size, the payoff of such refined calibration can be significant. For fur-
ther discussion of calibration and to download and use a standard calibration program, see
[158].
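To illustrate the least-squares flavor of such calibration in a deliberately reduced form, the sketch below estimates only the focal length from corner correspondences, assuming the target's corner positions are already known in the camera frame. A complete procedure, such as the standard program cited above, also recovers the target pose for each view, the principal point offset, and the distortion coefficients:

```python
def estimate_focal(target_pts, image_pts):
    """Least-squares estimate of focal length f from correspondences
    between target corners (X, Y, Z), expressed in the camera frame,
    and their measured image coordinates (x, y).  With the pinhole
    model x = f*X/Z and y = f*Y/Z, the f minimizing the summed squared
    residuals is a ratio of two sums."""
    num = den = 0.0
    for (X, Y, Z), (x, y) in zip(target_pts, image_pts):
        u, v = X / Z, Y / Z          # normalized (unit focal) coordinates
        num += x * u + y * v
        den += u * u + v * v
    return num / den

# Illustrative synthetic data: a 4x4 grid of corners on a plane at Z = 2,
# imaged by a camera with an (assumed) true focal length of 540 pixels.
f_true = 540.0
corners = [(0.1 * i, 0.1 * j, 2.0) for i in range(4) for j in range(4)]
pixels = [(f_true * X / Z, f_true * Y / Z) for (X, Y, Z) in corners]
f_est = estimate_focal(corners, pixels)
```

With noise-free synthetic correspondences the estimate matches the true focal length exactly; real measurements would leave a residual that the refinement step drives down.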
Assuming that the calibration step is complete, we can now formalize the range recovery problem. To begin with, we do not have the position of $P$ available, and therefore $(x'_l, y'_l, z'_l)$ and $(x'_r, y'_r, z'_r)$ are unknowns. Instead, by virtue of the two cameras we have pixels on the image planes of each camera, $(x_l, y_l, z_l)$ and $(x_r, y_r, z_r)$. Given the focal length $f$ of the cameras, we can relate the position of $P$ to the left camera image as follows:
$$\frac{x_l}{f} = \frac{x'_l}{z'_l} \quad\text{and}\quad \frac{y_l}{f} = \frac{y'_l}{z'_l} \qquad (4.31)$$
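As a quick numeric check of equation (4.31), with values chosen purely for illustration:

```python
f = 600.0                      # assumed focal length, in pixel units
xp, yp, zp = 0.5, -0.25, 2.0   # P in the left camera frame: (x'_l, y'_l, z'_l)

# From (4.31): x_l/f = x'_l/z'_l and y_l/f = y'_l/z'_l
x_l = f * xp / zp              # -> 150.0
y_l = f * yp / zp              # -> -75.0
```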
Let us concentrate first on recovery of the values $z'_l$ and $z'_r$. From equations (4.30) and (4.31) we can compute these values from any two of the following equations:
$$\left(r_{11}\frac{x_l}{f} + r_{12}\frac{y_l}{f} + r_{13}\right) z'_l + r_{01} = \frac{x_r}{f}\, z'_r \qquad (4.32)$$
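Equation (4.32) corresponds to the first row of the rigid-body transform; under the convention assumed here (right-frame coordinates equal the rotation applied to the left-frame coordinates, plus the translation with components $r_{01}$, $r_{02}$, $r_{03}$), the $y$ and $z$ rows supply two companion equations of the same form. The sketch below picks the first and third of these and solves the resulting linear system for $z'_l$ and $z'_r$:

```python
def recover_depths(xl, yl, xr, f, R, t):
    """Recover z'_l and z'_r from pixel coordinates (xl, yl) and xr,
    the focal length f, and the calibrated rotation R and translation t
    (assumed convention: right-frame coords = R @ left-frame coords + t).

    Row 1 (eq. 4.32):  a1*zl + t[0] = (xr/f)*zr
    Row 3:             a3*zl + t[2] = zr
    Substituting the second into the first leaves one linear equation
    in zl alone."""
    a1 = R[0][0] * xl / f + R[0][1] * yl / f + R[0][2]
    a3 = R[2][0] * xl / f + R[2][1] * yl / f + R[2][2]
    zl = ((xr / f) * t[2] - t[0]) / (a1 - (xr / f) * a3)
    zr = a3 * zl + t[2]
    return zl, zr
```

A self-consistency check: pick a point in the left camera frame, transform and project it into both images, and verify that the recovered depths match the ones used to generate the pixels.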