Page 35 - Dynamic Vision for Perception and Control of Motion

1.5  What Type of Vision System Is Most Adequate?      19


            on their own by associating data from sensors with background knowledge stored
            internally.
  Chapter 4 presents several different kinds of knowledge components useful for
mission performance and for behavioral decisions in the context of a complex
world with many different objects and subjects. This goes well beyond immediate
visual interpretation and takes more extended scales in space and time into
account, for which the foundation has been laid in Chapters 2 and 3. Chapter 4
is an outlook into future developments.
  Chapters 5 and 6 encompass the procedural knowledge enabling real-time visual
interpretation and scene understanding. Chapter 5 deals with extraction methods
for visual features as the basic operations in image sequence processing;
especially the bottom-up mode of robust feature detection is treated here.
Separate sections deal with efficient feature extraction for oriented edges (an
"orientation-selective" method) and with a new orientation-sensitive method
that exploits local gradient information for a collection of features: "2-D
nonplanarity" of a 2-D intensity function approximating local shading
properties in the image is introduced as a new feature separating homogeneous
regions with approximately planar shading from nonplanar intensity regions.
Via the planar shading model, besides homogeneous regions with linear 2-D
shading, oriented edges are detected, including their precise direction derived
from the gradient components [Hofmann 2004].
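The planar shading model described above can be illustrated with a minimal sketch: fit an intensity plane I(u, v) ≈ a + b·u + c·v to a small image mask by least squares; the coefficients (b, c) are the local gradient components, from which the edge direction follows directly. The function name, mask size, and residual measure are illustrative choices, not the book's implementation.

```python
import numpy as np

def planar_fit(mask):
    """Least-squares fit of a planar intensity model I(u, v) ~ a + b*u + c*v
    to a small image mask; returns the coefficients and the RMS fit residual.
    Illustrative sketch only, not the method of [Hofmann 2004]."""
    h, w = mask.shape
    v, u = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), u.ravel(), v.ravel()])
    coef, _, _, _ = np.linalg.lstsq(A, mask.ravel().astype(float), rcond=None)
    residual = mask.ravel() - A @ coef
    rms = np.sqrt(np.mean(residual ** 2))
    return coef, rms

# A perfectly planar intensity ramp: I(u, v) = v + 2*u
mask = np.add.outer(np.arange(5.0), 2.0 * np.arange(5.0))
(a, b, c), rms = planar_fit(mask)
# Edge (gradient) direction from the gradient components (b, c)
grad_dir = np.degrees(np.arctan2(c, b))
```

For a homogeneous region with linear shading the residual is near zero and the gradient direction is recovered exactly; a large residual signals a nonplanar region.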
  Intensity corners can be found only in nonplanar regions; since the planarity
check is very efficient computationally, and since nonplanar image regions
(with residues ≥ 3% in typical road scenes) are found in fewer than 5% of all
mask locations, computation-intensive corner detection can be confined to
these promising regions. In addition, most of the basic image data needed have
already been determined and are used in multiple ways.
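The gating step described above can be sketched as follows: scan the image with small masks, keep only those whose planar-fit residual (relative to the local intensity range) exceeds a threshold, and pass only these candidates to a corner detector. The mask size, the 3% threshold interpretation, and the residual normalization are illustrative assumptions, not the book's exact values.

```python
import numpy as np

def nonplanar_masks(image, size=5, threshold=0.03):
    """Return top-left corners of non-overlapping masks whose planar-fit
    residual exceeds the threshold; only these candidates would be passed on
    to corner detection. Parameters are illustrative, not the book's values."""
    h, w = image.shape
    v, u = np.mgrid[0:size, 0:size]
    A = np.column_stack([np.ones(size * size), u.ravel(), v.ravel()])
    pinv = np.linalg.pinv(A)  # precompute once, reuse at every mask position
    candidates = []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            patch = image[r:r + size, c:c + size].astype(float).ravel()
            residual = patch - A @ (pinv @ patch)
            scale = max(patch.max() - patch.min(), 1.0)  # local intensity range
            if np.sqrt(np.mean(residual ** 2)) / scale > threshold:
                candidates.append((r, c))
    return candidates
```

A planar intensity ramp yields no candidates, while a mask containing an intensity step corner is flagged, so an expensive corner operator runs only on the few percent of mask locations that can actually contain corners.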
  This bottom-up image feature extraction approach is complemented in Chapter 6
by the specification of algorithms using predicted features, in which knowledge
about object classes and object motion is exploited for recognition and
intelligent tracking of objects and subjects over time. These recursive
estimation schemes from the field of system dynamics, and their extension to
perspective mapping as the measurement process, constitute the core of Chapter
6. They are based on dynamic models of object motion and provide the link
between image features and object description in 3-D space and time; at the
same time, they are the major means for data fusion. This chapter builds on the
foundations laid in the previous ones. Recursive estimation is performed for n
single objects in parallel, each with specific parameter sets depending on the
object class and the aspect conditions. All these results are collected in the
dynamic object database (DOB).
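The recursive estimation scheme sketched above can be illustrated with a minimal linear Kalman filter: a dynamic motion model predicts each object's state, a measurement corrects it, and the per-object results are collected in a dictionary standing in for the dynamic object database. The constant-velocity model, direct position measurement (in place of perspective mapping), and noise matrices are illustrative assumptions, not the book's models.

```python
import numpy as np

class ObjectEstimator:
    """Minimal linear Kalman filter as a stand-in for the recursive estimation
    scheme: a constant-velocity model predicts the state [position, velocity],
    and a position measurement corrects it. All matrices are illustrative."""
    def __init__(self, x0, dt=0.1):
        self.x = np.array(x0, float)                 # state estimate
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # dynamic (motion) model
        self.H = np.array([[1.0, 0.0]])              # measurement model
        self.Q = 1e-3 * np.eye(2)                    # process noise
        self.R = np.array([[1e-2]])                  # measurement noise

    def step(self, z):
        # Prediction with the dynamic model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correction with the measurement
        y = np.atleast_1d(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

# n objects estimated in parallel; results collected in a stand-in DOB
# (object names and measurement values are made up for illustration)
dob = {}
for obj_id, meas in {"car_1": [0.1, 0.21, 0.32], "car_2": [5.0, 4.9, 4.8]}.items():
    est = ObjectEstimator([meas[0], 0.0])
    for z in meas:
        state = est.step(z)
    dob[obj_id] = state
```

Each object carries its own estimator instance with its own parameter set, mirroring the book's per-object recursive estimation running in parallel.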
  Chapters 7 to 14 encompass system integration for the recognition of roads,
lanes, and other vehicles, together with corresponding experimental results.
Chapter 7, as a historical review, shows the early beginnings. In Chapter 8,
the special challenge of initialization in dynamic road scene understanding is
discussed, whereas Chapter 9 gives a detailed description of various
application aspects of recursive road parameter and ego-state estimation while
cruising. Chapter 10 is devoted to the perception of crossroads and to
performing autonomous turnoffs with active vision. Detection and tracking of
other vehicles are treated in Chapter 11.