FIGURE 7.1
Of course, this is unfair to the cockroach, a highly intelligent critter with neurological complexity far beyond any AI system yet developed [1,2]. You should appreciate how hard it was to hold the little fella down while we wrote on its back.
Original image ArtyuStock | Dreamstime.
road kill on the AI superhighway, we need adherence to “rules of the road” in their development and deployment, and maybe even a hefty dose of common sense.
“Just whom are you calling stupid, buster?” Now I’ve offended my refrigerator, and she’s threatening an ice cream meltdown. I’d better clarify: the algorithm is not at fault; it is just inadequate for the task at hand, and we use it anyway. The stupidity is thus closer to home: what is to blame is the developers’ and our own (mis)understanding of CI capabilities relative to the requirements of the job we task the system to perform, and this leads us to use AI systems in inappropriate ways. AI system developers
are like parents everywhere: they love their children and tend to have inflated views of their capabilities (even setting aside the developers’ natural incentive of avarice).
Similarly, AI users are generally a gullible lot, moving from the last “next big thing”
to its successor with fond hopes of miraculous results. So how do we get the most out
of AI? Or, in contemporary usage, “make AI great (again)”? There are some good common-sense rules that should be followed in the CI development process, and
much more attention needs to be paid to the evaluation phase. Fortunately, there is
already a mature field of performance assessment methodology ready to assist in
this undertaking. In what follows we will try to provide a road map of this field.
Fig. 7.2 illustrates our paradigm for the AI development and implementation
process. The first requirement is for a well-defined task, such as identification of
an approaching vehicle or detection of a malignant tumor, possibly in conjunction
with a human decision maker or as a component of a larger AI system. A CI agent
or “observer,” call it “Hal,” is developed to address this task using a collection of
data from which it can abstract certain relevant features and use these features to
make a decision. We will make the simplifying assumption that this is a binary task for each of a number of cases, such as patients or scenes: either a patient is abnormal (A) or is not (B); either an approaching object in a scene is a bicycle (A) or a pedestrian (B). Our world of binary decisions is admittedly a