Page 66 - Artificial Intelligence for the Internet of Everything
thing. One or more of the computers is assumed to have been compromised,
whether the compromise is established as fact or merely suspected.
Due to the contested nature of the communications environment (e.g.,
the enemy is jamming the communications or radio silence is required to
avoid detection by the enemy), communications between the thing and
other elements of the friendly force can be limited and intermittent. Under
some conditions, communications are entirely impossible.
Given the constraints on communications, conventional centralized
cyber defense is infeasible. (Here, centralized cyber defense refers to an
architecture where local sensors send cyber-relevant information to a central
location, where highly capable cyber defense systems and human analysts
detect the presence of malware and initiate corrective actions remotely.)
It is also unrealistic to expect that human warfighters in the vicinity of
the thing (if they exist) have the necessary skills or time available to perform
cyber defense functions for that thing.
Therefore, cyber defense of the thing and its computing devices must be
performed by an intelligent, autonomous software agent. The agent (or mul-
tiple agents per thing) would stealthily patrol the networks, detect the enemy
agents while remaining concealed, and then destroy or degrade the enemy
malware. The agent must do so mostly autonomously, without the support
or guidance of a human expert.
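The patrol-detect-act cycle described above can be sketched in a few lines. This is only an illustrative toy, not a real detector: all names (`scan_processes`, `is_suspicious`, `patrol`) and the allowlist-based detection rule are assumptions made for the example.

```python
def scan_processes(host):
    """Enumerate software running on the host (stand-in for real telemetry)."""
    return list(host["processes"])

def is_suspicious(proc):
    """Toy detector: flag anything not on the host's (assumed) allowlist."""
    return proc not in {"nav_control", "sensor_feed", "comms_stub"}

def patrol(host):
    """One autonomous patrol pass: detect and neutralize suspected malware
    on the local host, with no human expert in the loop."""
    neutralized = [p for p in scan_processes(host) if is_suspicious(p)]
    # Act locally: remove the flagged software from this host only.
    host["processes"] = [p for p in host["processes"] if p not in neutralized]
    return neutralized

host = {"processes": ["nav_control", "sensor_feed", "dropper.bin"]}
print(patrol(host))          # processes the agent neutralized
print(host["processes"])     # what remains on the host
```

A real agent would of course rely on far richer sensing and stealthier detection than an allowlist, but the autonomous sense-decide-act structure is the same.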
To fight the enemy malware deceptively deployed on the friendly thing,
the agent often must take destructive actions, such as deleting or
quarantining certain software. Such destructive actions are carefully
controlled by the appropriate rules of engagement and are allowed only on the
computer where the agent resides. The agent may also be the primary
mechanism responsible for defensive cyber maneuvering (e.g., a moving
target defense), deception (e.g., redirection of malware to honeypots (De Gaspari,
Jajodia, Mancini, & Panico, 2016)), self-healing (e.g., Azim, Neamtiu, &
Marvel, 2014), and other such autonomous or semi-autonomous behavior
(Jajodia, Ghosh, Swarup, & Wang, 2011).
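The constraint that destructive actions are permitted only on the agent's own host can be expressed as a simple gate that every proposed action must pass. The specific rule set below (which action types count as destructive, the host identifiers) is an illustrative assumption, not actual doctrine.

```python
# Action types the (assumed) rules of engagement treat as destructive.
ALLOWED_DESTRUCTIVE = {"delete", "quarantine"}

def roe_permits(action, target_host, agent_host):
    """Return True if the rules of engagement permit this action.

    Destructive actions are allowed only on the computer where the
    agent itself resides; non-destructive actions (observe, report)
    are unrestricted in this sketch.
    """
    if action in ALLOWED_DESTRUCTIVE:
        return target_host == agent_host
    return True

print(roe_permits("quarantine", "uav-7", "uav-7"))  # True: own host
print(roe_permits("delete", "uav-9", "uav-7"))      # False: remote host
```

Placing the gate in front of the agent's effectors, rather than inside its decision logic, keeps the rules of engagement auditable and separately updatable.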
In general, the actions of the agent cannot be guaranteed to preserve the
integrity of the functions and data of friendly computers. There is a risk that
an action of the agent may “break” the friendly computer, disable important
friendly software, or corrupt or delete important data. In a military environ-
ment, this risk must be balanced against the death or destruction caused by
the enemy if an agent’s recommended action is not taken.
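The risk trade-off above can be framed, in heavily simplified form, as an expected-cost comparison: act only when the expected harm of inaction (the enemy's effects) outweighs the expected harm of the agent's own action "breaking" the friendly computer. The probabilities and cost values here are placeholders for illustration only.

```python
def should_act(p_break, cost_break, p_enemy_harm, cost_enemy_harm):
    """Act iff the expected cost of acting is below the expected
    cost of leaving the suspected malware in place."""
    expected_cost_acting = p_break * cost_break
    expected_cost_waiting = p_enemy_harm * cost_enemy_harm
    return expected_cost_acting < expected_cost_waiting

# e.g., 10% chance the action disables the host (cost 50) versus a
# 60% chance unchecked malware causes mission loss (cost 100):
print(should_act(0.10, 50, 0.60, 100))  # True: the agent should act
```

In practice neither the probabilities nor the costs are known precisely, which is exactly why the balance is governed by rules of engagement rather than left entirely to the agent.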
Provisions are made to enable a remote or local human controller to fully
observe, direct, and modify the actions of the agent. However, it is