3.1.2 Agency and computational theory
Even though it seems reasonable to explore the biological and cognitive sciences for insights into intelligence, how can we compare such different systems: carbon and silicon “life” forms? One powerful means of conceptualizing the different systems is to think of an abstract intelligent system. Consider something we’ll call an AGENT. The agent is self-contained and independent. It has its own “brains” and can interact with the world to make changes or to sense what is happening. It has self-awareness. Under this definition, a person is an agent. Likewise, a dog or a cat or a frog is an agent. More importantly, an intelligent robot would be an agent, as would certain kinds of web search engines which continue to look for new items of interest even after the user has logged off. Agency is a concept in artificial intelligence that allows researchers to discuss the properties of intelligence without discussing the details of how the intelligence got into a particular agent. In OOP terms, “agent” is the superclass and the classes “person” and “robot” are derived from it.
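The superclass relationship can be sketched directly in code. The following minimal Python sketch uses hypothetical class and method names (sense and act are our choices, not the book’s) to show how reasoning written against the agent superclass applies to any derived class:

    # A minimal sketch of the superclass idea; class and method names
    # here are illustrative assumptions, not taken from the book.

    class Agent:
        """A self-contained, independent entity that can sense and act on the world."""

        def sense(self):
            raise NotImplementedError

        def act(self):
            raise NotImplementedError


    class Person(Agent):
        def sense(self):
            return "sees, hears, smells"

        def act(self):
            return "moves and speaks"


    class Robot(Agent):
        def sense(self):
            return "reads cameras, sonar, and heat sensors"

        def act(self):
            return "drives motors and effectors"


    # Code written against the Agent superclass works for any derived agent,
    # without caring how the intelligence got into it.
    for agent in (Person(), Robot()):
        print(type(agent).__name__, "senses:", agent.sense())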
Of course, just referring to animals, robots, and intelligent software packages as “agents” doesn’t make the correspondences between intelligence any clearer. One helpful way of seeing correspondences is to decide the level at which these entities have something in common. The set of levels of commonality leads to what is often called a COMPUTATIONAL THEORY [88] after David Marr. Marr was a neurophysiologist who tried to recast biological vision processes into new techniques for computer vision. The levels in a computational theory can be greatly simplified as:
Level 1: Existence proof of what can/should be done. Suppose a roboticist
is interested in building a robot to search for survivors trapped in a build-
ing after an earthquake. The roboticist might consider animals which seek
out humans. As anyone who has been camping knows, mosquitoes are
very good at finding people. Mosquitoes provide an existence proof that
it is possible for a computationally simple agent to find a human being
using heat. At Level 1, agents can share a commonality of purpose or
functionality.
Level 2: Decomposition of “what” into inputs, outputs, and transforma-
tions. This level can be thought of as creating a flow chart of “black
boxes.” Each box represents a transformation of an input into an output.
Returning to the example of a mosquito, the roboticist might realize from
biology that the mosquito finds humans by homing on the heat of a hu-