to see. Programs (both neural and expert) must capture the video
image and process it (extract useful data from it). Machine vision
has been achieved in limited, targeted areas.
Chapter 1 looked at the Papnet computer, which uses neural soft-
ware to analyze Pap smear slides with higher accuracy than humans
can achieve. Other researchers have developed vision systems that
can steer a vehicle based on the contours of the road being driven.
Before we can attempt to simulate human vision, we need (in addi-
tion to developing improved image processing, which is no easy task
in itself) to develop stereoscopically mounted video cameras. Some
research in this area is taking place at the Massachusetts Institute of
Technology (MIT) on its humanoid robot, COG. With stereoscopic
cameras, two video pictures must be processed and then merged to
create a three-dimensional (3D) representation, the same process
used in human 3D vision. To estimate depth, each camera must be
mounted on gimbals that allow the cameras to turn inward (converge)
and focus on an object. The amount of convergence is then used to
judge the distance of the object.
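To make the geometry concrete, here is a minimal sketch (not from the original text) of how a vergence reading could be turned into a distance estimate. It assumes the simplest case: both cameras turn inward by the same angle to fixate the object, and the spacing between the cameras (the baseline) is known. The function and parameter names are placeholders.

#include <stdio.h>
#include <math.h>

#define DEG_TO_RAD (3.14159265358979323846 / 180.0)

/* Distance from the midpoint between the cameras to the fixated object.
   baseline_cm:  spacing between the two camera centers.
   vergence_deg: how far each camera has turned inward from straight ahead.
   Simple triangulation: half the baseline divided by the tangent of the
   vergence angle. */
double distance_from_convergence(double baseline_cm, double vergence_deg)
{
    return (baseline_cm / 2.0) / tan(vergence_deg * DEG_TO_RAD);
}

int main(void)
{
    /* Example: cameras 10 cm apart, each turned inward by 2 degrees. */
    printf("Estimated distance: %.1f cm\n",
           distance_from_convergence(10.0, 2.0));
    return 0;
}

A nearby object forces a large vergence angle while a distant one needs almost none, which is why the amount of convergence carries distance information.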
Machine vision is a fertile field of development. Currently, most
vision systems require a high-powered computer dedicated just
to vision processing.
Body sense
Body sense provides some information on where one is and what
position one is in. Limited body sense can be accomplished in robots
by using a variety of tilt switches (see Fig. 5.28). This will at least
Fig. 5.28 Tilt switches
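As a purely illustrative sketch (not taken from the book), the snippet below shows one way the open/closed states of a pair of tilt switches could be combined into a rough report of the robot's attitude. The switch arrangement and names are assumptions; a real robot would read these values from digital inputs wired to the switches.

#include <stdio.h>

/* Combine the states of two tilt switches (one sensing front-to-back tilt,
   the other side-to-side tilt) into a rough description of body attitude.
   A value of 1 means that switch's contacts have closed because of tilt. */
const char *body_attitude(int tilted_forward, int tilted_sideways)
{
    if (tilted_forward && tilted_sideways)
        return "tilted forward and sideways";
    if (tilted_forward)
        return "tilted forward";
    if (tilted_sideways)
        return "tilted sideways";
    return "level";
}

int main(void)
{
    /* Example readings; on a real robot these come from the switches. */
    printf("Attitude: %s\n", body_attitude(1, 0));
    return 0;
}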