Page 136 - Dynamic Vision for Perception and Control of Motion
4 Application Domains, Missions, and Situations
4.3.3 Rule Systems for Decision-Making
Perception systems for driver assistance and for autonomous vehicle guidance will
need very similar sets of rules for the perception part (possibly specialized to some
task of special interest). Once sufficient computing power for visual scene analysis
and understanding becomes affordable, the information already contained in the image
streams can be fully exploited, since both kinds of application gain from a deeper
understanding of the motion processes observed. This favors three separate rule
bases in a modular system: the first, for perception (control of gaze direction
and attention), has to be available to both types of system. In addition, there have
to be two different sets, one for assistance systems and one for autonomous driving
(locomotion, see Chapters 13 and 14).
Since knowledge components for these task domains may differ widely, they
will probably be developed by different communities. For driver assistance sys-
tems, the human-machine-interface with many psychological aspects poses a host
of challenges and interface parameters. Especially if the driver remains in charge of
all safety aspects for liability reasons, the choice of interface channel (audio, visual,
or tactile) and the way the warnings are implemented are crucial. Quite a bit of effort
is going into these questions in industry at present (see the proceedings of the yearly
International Symposium on Intelligent Vehicles [Masaki 1992–1999]). Tactile inputs
may even include motion control of the whole vehicle. Horseback riders develop a
fine feeling for slight reactions of the animal to its own perceptions. The question
is whether similar types of special motion are useful for the vehicle to direct the
driver's attention to some event the vehicle has noticed. Introducing vibrations on
the proper side of the driver's seat when the vehicle approaches the left or right lane
marking too closely is a first step in this direction [Citroen 2004]. First corrective
reactions in the safe direction, or slight resistance to intended maneuvers, may be
further steps; because reactions vary across the population of drivers, finding the
proper parameters is a delicate challenge.
For autonomous driving, the task is comparatively simple: decide when to use
which maneuvers and/or feedback algorithms, and with which set of optimal pa-
rameters. Monitoring the process initiated is mandatory for checking the perfor-
mance actually achieved against the nominal one expected. Statistics should be
kept on the behavior observed, for learning purposes.
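The monitoring and statistics-keeping described above can be sketched in code. This is a minimal illustration only; the class name, the choice of deviation metric, and the lane-change example are assumptions, not the book's actual implementation.

```python
# Illustrative sketch: compare actual maneuver performance against the
# nominal (expected) one and keep simple statistics for later learning.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class ManeuverMonitor:
    name: str
    deviations: list = field(default_factory=list)  # actual minus nominal

    def record(self, nominal: float, actual: float) -> None:
        # Log how far the observed behavior deviates from the expected one.
        self.deviations.append(actual - nominal)

    def summary(self) -> dict:
        # Aggregate statistics that a learning component could consume.
        return {
            "maneuver": self.name,
            "samples": len(self.deviations),
            "mean_deviation": mean(self.deviations) if self.deviations else 0.0,
            "max_abs_deviation": max((abs(d) for d in self.deviations), default=0.0),
        }


monitor = ManeuverMonitor("lane_change")
monitor.record(nominal=2.0, actual=2.3)  # e.g., lateral offset in meters
monitor.record(nominal=2.0, actual=1.9)
stats = monitor.summary()
```

A learning component could then adjust maneuver parameters whenever the accumulated deviations drift away from zero.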
In case some unexpected “event” occurs (such as a vehicle cutting into your lane
immediately in front of you without signaling), the situation has to be handled
by a transition in behavior; reducing the throttle setting or hitting the brakes is
the solution in the example given. These types of behavioral transitions are coded
in extended state charts [Harel 1987; Maurer 2000]; the actual implementation and
results will be discussed in later chapters. The development of these algorithms and
their tuning, taking the delay times of the hardware involved into account, is a
challenging engineering task requiring quite a bit of effort.
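The cut-in example above can be written down as a small transition table in the spirit of a state chart. The states, events, and transitions here are illustrative assumptions for the cut-in scenario, not the extended state charts of [Harel 1987; Maurer 2000] themselves.

```python
# Illustrative sketch: behavior transitions as a (state, event) -> state table.
# "vehicle_cut_in" triggers the throttle/brake reaction from the text.
TRANSITIONS = {
    ("cruise", "vehicle_cut_in"): "reduce_throttle",
    ("reduce_throttle", "gap_shrinking"): "brake",
    ("reduce_throttle", "gap_restored"): "cruise",
    ("brake", "gap_restored"): "cruise",
}


def next_state(state: str, event: str) -> str:
    # Events without a matching rule leave the current behavior unchanged.
    return TRANSITIONS.get((state, event), state)


state = "cruise"
for event in ["vehicle_cut_in", "gap_shrinking", "gap_restored"]:
    state = next_state(state, event)
```

In a real system each transition would additionally be guarded by timing conditions that account for the hardware delay times mentioned above.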
Note that in the solution chosen here, the rule base for decision-making does
not contain the control output for the maneuvers, but only the conditions under
which to switch from one maneuver or driving state to another. Control implemen-
tation is done at a lower level, with processors closer to the actuators (see Section 3.4.4).
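This separation of concerns can be sketched as two layers: a rule base that only decides *which* maneuver to run, and a lower-level controller that owns the actual actuator commands. All function names, thresholds, and control values below are hypothetical illustrations of the layering, not the book's implementation.

```python
# Illustrative sketch of the two-layer split: decision rules above,
# control outputs below, as described in the text.

def decide_maneuver(current: str, gap_m: float, closing_rate_mps: float) -> str:
    """Rule base: returns only the maneuver to switch to, never a control signal."""
    if gap_m < 10.0 or closing_rate_mps > 5.0:
        return "brake"
    if gap_m < 25.0:
        return "reduce_throttle"
    return current


def low_level_control(maneuver: str) -> dict:
    """Lower level near the actuators: maps a maneuver to control outputs."""
    table = {
        "brake": {"throttle": 0.0, "brake": 0.6},
        "reduce_throttle": {"throttle": 0.2, "brake": 0.0},
        "cruise": {"throttle": 0.5, "brake": 0.0},
    }
    return table[maneuver]


maneuver = decide_maneuver("cruise", gap_m=8.0, closing_rate_mps=6.0)
command = low_level_control(maneuver)
```

Keeping the rule base free of control outputs means the switching conditions can be changed without retuning the controllers near the actuators, and vice versa.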

