Section 6.5 Shape from Texture
FIGURE 6.21: On the left, a textured surface, whose texture is a set of repeated elements, in this case, spots. Center left, a reconstruction of the surface, made using texture information alone. This reconstruction has been textured, which hides some of its imperfections. Center right, the same reconstruction, now rendered as a slightly glossy gray surface. Because texture elements are repeated, we can assume that if different elements have significantly different brightness, this is because they experience different illumination. Right shows an estimate of the illumination on the surface obtained from this observation. Notice how folds in the dress (arrows) tend to be darker; this is because, for a surface element at the base of a fold, nearby cloth blocks a high percentage of the incident light. This figure was originally published as Figure 4 of “Recovering Shape and Irradiance Maps from Rich Dense Texton Fields,” by A. Lobay and D. Forsyth, Proc. IEEE CVPR 2004, © IEEE, 2004.
6.5.2 Shape from Texture for Curved Surfaces
Shape from texture is more complicated for curved surfaces, because there are
more parameters to estimate. There are a variety of strategies, and there remains
no consensus on what is best. If we assume that a texture consists of repeated
small elements, then individual elements display no observable perspective effects
(because they are small). Furthermore, curved surfaces often span fairly small
ranges of depth, because if they curve fast enough, they must eventually turn away
from the eye (if they don’t, we might be able to model them as planes). All this
suggests that we assume the surface is viewed in an orthographic camera.
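This reasoning can be checked numerically. The following sketch (with made-up numbers, not taken from the text) projects a small planar element both with a perspective camera and with the scaled-orthographic approximation, and measures how much they disagree:

```python
import numpy as np

# A small planar texture element: a ring of points of radius r at
# depth z0, viewed by a camera with focal length f (illustrative values).
r, z0, f = 0.01, 2.0, 1.0
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
X = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

# Tilt the element slightly, so its points span a small range of depth.
slope = 0.5                       # depth gradient across the element
Z = z0 + slope * X[:, 0]

# Perspective projection: each point is scaled by f / (its own depth).
persp = f * X / Z[:, None]

# Scaled-orthographic approximation: one scale f / z0 for the whole element.
ortho = f * X / z0

# Because the element is small relative to its depth, the two projections
# differ by only a tiny fraction of the element's image size.
err = np.abs(persp - ortho).max() / np.abs(ortho).max()
print(err)   # a fraction of a percent for these numbers
```

With an element a hundredth of its depth in size, the perspective effects within the element are on the order of a quarter of a percent of its image extent, which is why modeling each element with an orthographic camera is reasonable.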
Now consider a set of elements on a curved surface. Each element is an
instance of a model element; you should think of the surface elements as copies of
the model, which have been placed on the surface at different locations. Each is
small, so we can model them as lying on the surface’s tangent plane. Each element
has different slant and tilt directions. This means that each image instance of the
element is an instance of the model element that has been rotated and translated
to place it in the image, then scaled along the image slant direction. It turns out
that, given sufficient image instances, we can infer both the model element and the
surface normal at the element (up to a two-fold ambiguity; Figure 6.20) from this