dataset for a massive 1000-class task. Whereas ImageNet data is labeled and
provides one main class per image, real field data is fraught with partially
occluded objects and ambiguous detections.
Dealing with limited samples means moving beyond the current state of the art in deep learning, which seeks to learn efficient representations for entire domains by allowing each sample to influence the model only a very small amount. Embracing classical nonparametric techniques that treat each piece
of data as representative of its own local domain is one potential path to
sample-efficient learning. Overcoming the never-ending growth of data
required to define our model is an area of active research, but by allowing
the system to trade accuracy for efficiency, it is possible to keep the cost of
learning in check (Koppel, Warnell, Stump, & Ribeiro, 2017).
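To make the trade-off concrete, the following toy sketch in Python (all names are ours and purely illustrative; it does not reproduce the method of Koppel et al., 2017) shows a nonparametric classifier in which every retained sample speaks for its own local neighborhood, while a compression tolerance bounds how large the stored dictionary may grow.

```python
import numpy as np

class BudgetedNearestNeighbor:
    """Toy nonparametric classifier: each retained sample represents its own
    local region, and a compression tolerance keeps model growth in check.
    Illustrative sketch only; not the algorithm of Koppel et al. (2017)."""

    def __init__(self, tolerance=0.1):
        self.tolerance = tolerance   # larger tolerance -> smaller model, lower accuracy
        self.X, self.y = [], []      # retained dictionary of samples and labels

    def partial_fit(self, x, label):
        # Keep a new sample only if the current dictionary cannot already
        # explain it: the nearest stored point is far away or disagrees.
        x = np.asarray(x, dtype=float)
        if self.X:
            dists = np.linalg.norm(np.asarray(self.X) - x, axis=1)
            nearest = int(np.argmin(dists))
            if dists[nearest] < self.tolerance and self.y[nearest] == label:
                return               # redundant sample; discard to save memory
        self.X.append(x)
        self.y.append(label)

    def predict(self, x):
        dists = np.linalg.norm(np.asarray(self.X) - np.asarray(x, dtype=float), axis=1)
        return self.y[int(np.argmin(dists))]
```

Raising the tolerance shrinks the stored dictionary at the cost of coarser decision boundaries, which is precisely the accuracy-for-efficiency trade described above.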
Truly intelligent agents, however, will not need to memorize data to make sense of the changing world; rather, they will be able to investigate only
a few examples in order to quickly learn how the current environment
relates to their experience. This technique of domain adaptation (Patel,
Gopalan, Li, & Chellappa, 2015) promises to allow models trained over
exhaustive datasets of benign environments to remain useful on a dynamic
battlefield. Whether by updating only a small portion of the model
(Chu, Madhavan, Beijbom, Hoffman, & Darrell, 2016), appealing to the
underlying geometry of the data manifolds (Fernando, Habrard, Sebban,
& Tuytelaars, 2013; Gong, Shi, Sha, & Grauman, 2012), or turning the
learning algorithms upon themselves to be trained how to adapt (Long,
Cao, Wang, & Jordan, 2015), future agents will maintain flexible representations of their learned knowledge, amenable to reinterpretation and reuse.
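One simple geometric instance of such adaptation is sketched below in Python, under the assumption of labeled source data and unlabeled target data (function and parameter names are ours, not taken from the cited works): the principal subspace of the source domain is aligned with that of the target before classification, in the spirit of the subspace alignment of Fernando et al. (2013).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def subspace_alignment_predict(Xs, ys, Xt, n_components=20):
    """Rough sketch of subspace alignment: project source data into the
    target's PCA subspace via the alignment matrix M = Ps^T Pt, then
    classify the target samples with a nearest-neighbor rule.
    Assumes features have already been standardized."""
    Ps = PCA(n_components=n_components).fit(Xs).components_.T   # source basis (d x k)
    Pt = PCA(n_components=n_components).fit(Xt).components_.T   # target basis (d x k)
    M = Ps.T @ Pt                    # rotate source axes toward target axes
    Xs_aligned = Xs @ Ps @ M         # source data in aligned coordinates
    Xt_proj = Xt @ Pt                # target data in its own subspace
    clf = KNeighborsClassifier(n_neighbors=1).fit(Xs_aligned, ys)
    return clf.predict(Xt_proj)      # predicted labels for the unlabeled target domain
```

Only the low-dimensional bases and a small alignment matrix are recomputed for a new environment; the labeled source data themselves are reused unchanged.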
Flexibility of learning and knowledge is crucial: it will undoubtedly be
undesirable for an agent to enter a new environment with pretrained
(preconceived) absolute notions of how it should perceive and act. Instead,
an agent will always be formulating and solving learning problems, and the role of training is to teach the system how to do this as efficiently as possible, perhaps even in one learning step (Finn, Abbeel, & Levine, 2017). This
fascinating concept of meta-learning, or learning how to learn (Andrychowicz
et al., 2016), allows the agent to finally take advantage of both its knowledge
and experience to perceive and interact with the dynamic world and an
evolving team and mission.
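As a minimal illustration of this idea, the sketch below (plain NumPy, first-order only, with a hypothetical task structure; it is not the full second-order algorithm of Finn et al., 2017) meta-trains the initialization of a linear model so that a single inner gradient step adapts it to a new task.

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of the mean-squared error for a linear model y ~ X @ w."""
    return 2.0 / len(y) * X.T @ (X @ w - y)

def first_order_maml(tasks, dim, alpha=0.01, beta=0.001, meta_steps=1000):
    """First-order MAML-style meta-learning sketch.  Each task is assumed
    to be a dict with 'support' and 'query' entries, each an (X, y) pair."""
    w = np.zeros(dim)                                  # meta-learned initialization
    for _ in range(meta_steps):
        task = tasks[np.random.randint(len(tasks))]
        Xs, ys = task["support"]
        Xq, yq = task["query"]
        w_adapted = w - alpha * mse_grad(w, Xs, ys)    # one inner adaptation step
        w = w - beta * mse_grad(w_adapted, Xq, yq)     # first-order meta-update
    return w
```

Here the inner step plays the role of on-the-spot adaptation in a new environment, while the outer update shapes an initialization from which that single step suffices.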
The domains that agents must learn and understand are vast and complex.
A typical example might be a video snippet of events and physical surroundings for a robot, where the overwhelming majority of elements (e.g., pieces
of rubble) are hardly relevant and potentially misleading for the purposes of