recognized that human control is often impossible, especially because of
intermittent communications. The agent is therefore able to plan, analyze,
and perform most or all of its actions autonomously. Similarly, provisions are
made for the agent to collaborate with other agents (residing on other computers); however, in most cases, because the communications are impaired
or observed by the enemy, the agent operates alone.
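A minimal sketch of this fallback behavior, written in Python purely for illustration, might look as follows; all class and method names (CyberDefenseAgent, Peer, comms_available, and so on) are assumptions of this sketch rather than elements of any fielded agent.

```python
# Hypothetical sketch of the autonomy/collaboration fallback described above.
# Every name here is an illustrative assumption, not an implementation
# drawn from the chapter.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Peer:
    """A friendly agent residing on another computer."""
    address: str
    reachable: bool = False  # set by the (intermittent) communications layer


@dataclass
class CyberDefenseAgent:
    peers: List[Peer] = field(default_factory=list)

    def comms_available(self) -> bool:
        # Communications are assumed intermittent and possibly observed;
        # here we simply check whether any friendly peer is reachable.
        return any(p.reachable for p in self.peers)

    def plan_and_act_alone(self) -> None:
        # Plan, analyze, and execute defensive actions without human control.
        observations = self.sense()
        plan = self.plan(observations)
        self.execute(plan)

    def collaborate(self) -> None:
        # Exchange state with whichever friendly agents are reachable.
        for peer in (p for p in self.peers if p.reachable):
            self.share_state(peer)

    def step(self) -> None:
        # Default posture: operate alone; collaborate only when a link exists.
        if self.comms_available():
            self.collaborate()
        self.plan_and_act_alone()

    # Placeholders standing in for the agent's actual capabilities.
    def sense(self): return {}
    def plan(self, observations): return []
    def execute(self, plan): pass
    def share_state(self, peer: Peer): pass
```

The point of the sketch is only that collaboration is opportunistic, while autonomous planning and action remain the default posture.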
The enemy malware and its associated capabilities and techniques evolve rapidly, as does the environment in general, together with the mission and constraints to which the thing is subject. Therefore the agent is capable of autonomous learning.
Because the enemy malware knows that the agent exists and is likely to
be present on the computer, the enemy malware seeks to find and destroy
the agent. Therefore the agent possesses techniques and mechanisms for
maintaining a degree of stealth, camouflage, and concealment. More generally, the agent takes measures that reduce the probability of its detection
by the enemy malware. The agent is mindful of the need to exercise
self-preservation and self-defense.
It is assumed here that the agent resides on the computer where it was
originally installed by a human controller or by an authorized process.
We envision the possibility that an agent may move itself (or a replica of
itself) to another computer. However, it is assumed that such propagation
occurs only under exceptional and well-specified conditions and only within
a friendly network, from one friendly computer to another friendly computer. This situation brings to mind the controversy about “good viruses.”
Such viruses were proposed, criticized, and dismissed earlier (Muttik, 2016).
These criticisms do not apply here. This agent is not a virus because it only
propagates under explicit conditions within authorized and cooperative
nodes. Also, it is used only in military environments, where most of the usual
concerns are irrelevant.
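To make this constraint concrete, the following Python sketch gates replication on an explicit allow-list of friendly nodes and on a set of well-specified exceptional conditions; every identifier and node name in it is hypothetical.

```python
# Hypothetical sketch of the propagation constraint described above:
# the agent replicates only under explicit, well-specified conditions and
# only to authorized friendly nodes. Names and conditions are illustrative.

from typing import Callable, Iterable

# Assumed allow-list of friendly computers, e.g., distributed at install time.
FRIENDLY_NODES = {"uav-12", "relay-03", "vehicle-07"}

# An exceptional condition under which replication is permitted (a predicate).
ExceptionalCondition = Callable[[], bool]


def may_propagate(target_node: str,
                  conditions: Iterable[ExceptionalCondition]) -> bool:
    """Return True only if the target is friendly and at least one
    well-specified exceptional condition currently holds."""
    if target_node not in FRIENDLY_NODES:
        return False  # never leave the friendly network
    return any(condition() for condition in conditions)


def propagate(target_node: str,
              conditions: Iterable[ExceptionalCondition]) -> None:
    if not may_propagate(target_node, conditions):
        return  # default behavior: remain on the current host
    replicate_to(target_node)  # placeholder for the actual transfer


def replicate_to(target_node: str) -> None:
    # Placeholder: copy the agent (or a replica) to the cooperating node.
    pass
```

Under such a gate the default behavior is to remain on the host computer, which is precisely what distinguishes this agent from a self-propagating virus.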
3.4 AI WILL PERCEIVE THE COMPLEX WORLD
Agents will have to become useful teammates, not tools, of human warfighters on a highly complex and dynamic battlefield. Fig. 3.1 depicts an
environment wherein a highly dispersed team of human and intelligent
agents (including but not limited to physical robots) is attempting to access
a multitude of highly heterogeneous and uncertain information sources and
use them to form situational awareness and make decisions (Kott, Singh,
McEneaney, & Milks, 2011), while simultaneously trying to survive