Page 53 - Artificial Intelligence for the Internet of Everything

          This formulation uses robust optimization of the expected loss under
          worst-case adversarial perturbations of the training data. The inner
          maximization corresponds to finding adversarial examples and can be
          approximated using IGSM (Kurakin et al., 2016). This approach falls
          into the category of defenses
          that use adversarial training (Shaham, Yamada, & Negahban, 2015). Instead of
          training with only adversarial examples, using a mixture of normal and
          adversarial examples in the training set has been found to be more effective
          (Moosavi-Dezfooli et al., 2016; Szegedy et al., 2013). Another alternative is
          to augment the learning objective with a regularization term corresponding to
          the adversarial inputs (Goodfellow et al., 2014). More recently, logit pairing
          has been shown to be an effective approximation of adversarial regulariza-
          tion (Kannan, Kurakin, & Goodfellow, 2018).
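The adversarial-training recipe above can be sketched in a few lines. The following is a minimal, hypothetical NumPy example (not from the chapter): a toy logistic-regression model trained on a mixture of clean inputs and one-step FGSM perturbations, where the single gradient-sign step approximates the inner maximization; IGSM simply iterates this step.

```python
import numpy as np

# Hypothetical toy setup: 2-D logistic regression with an FGSM inner step.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One gradient-sign step approximating the inner maximization."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

# Two Gaussian blobs (assumed data, for illustration only).
X = rng.normal(size=(200, 2)) + 2.0
X[:100] -= 4.0                               # class 0 near (-2,-2), class 1 near (2,2)
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.1
for _ in range(200):
    X_adv = fgsm(X, y, w, b, eps)            # approximate worst-case inputs
    X_mix = np.vstack([X, X_adv])            # mixture of clean + adversarial
    y_mix = np.concatenate([y, y])
    p = sigmoid(X_mix @ w + b)
    w -= lr * X_mix.T @ (p - y_mix) / len(y_mix)
    b -= lr * np.mean(p - y_mix)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
adv_acc = np.mean((sigmoid(fgsm(X, y, w, b, eps) @ w + b) > 0.5) == y)
```

With a multistep inner loop and a projection back onto the ε-ball, the same skeleton corresponds more closely to the IGSM-based approximation of the robust-optimization formulation.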
             Another category of defense against adversarial attacks on neural net-
          works is defensive distillation (Papernot, McDaniel, Jha, et al.,
          2016). These methods modify the training process of neural networks to
          make it difficult to launch gradient-based attacks directly on the network.
          The key idea is to use the distillation training technique (Hinton, Vinyals, &
          Dean, 2015) to hide the gradient between the presoftmax layer and the
          softmax outputs. Carlini and Wagner (2016) found methods to break this
          defense by changing the loss function, calculating gradients directly from
          the presoftmax layer, and transferring attacks from an easy-to-attack network
          to a distilled network. More recently, Athalye, Carlini, and Wagner (2018)
          showed that it is possible to bypass several defenses proposed for the
          white-box setting (Fig. 2.3).
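The gradient masking that defensive distillation relies on can be illustrated numerically. In this hedged sketch (toy numbers, not the original experiments), logits learned at a high distillation temperature T come out roughly T times larger; evaluating the softmax at T = 1 then saturates it, so the Jacobian of the softmax outputs with respect to the presoftmax layer underflows to approximately zero and a naive gradient-based attack stalls.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def softmax_jacobian(z, T=1.0):
    """Jacobian of softmax(z/T) w.r.t. z: (diag(p) - p p^T) / T."""
    p = softmax(z, T)
    return (np.diag(p) - np.outer(p, p)) / T

logits = np.array([2.0, 1.0, 0.5])   # assumed presoftmax values
hot = logits * 40.0                  # logits as if trained at temperature T = 40

g_normal = np.abs(softmax_jacobian(logits)).max()   # healthy gradient signal
g_masked = np.abs(softmax_jacobian(hot)).max()      # saturated: numerically ~0
```

This also suggests why the Carlini and Wagner attack works: the presoftmax logits themselves are not saturated, so computing gradients directly from that layer restores a usable attack signal.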


          2.6 SUMMARY AND CONCLUSION

          This chapter provided an overview of classical and modern statistical-
          learning theory, and of how numerical optimization can be used to solve
          the corresponding mathematical problems, with an emphasis on UQ. We
          discussed how ML and artificial intelligence serve as the fundamental
          algorithmic building blocks of the IoBT: they address the decision-making
          problems that arise in the underlying control, communication, and
          networking of the IoBT infrastructure, and they are an inevitable part of
          almost all military-specific applications developed over the IoBT. We
          studied UQ for ML and artificial intelligence within the context of the
          IoBT, where providing an accurate measure of error over the output, in
          addition to a precise output, is critical in military settings. We studied
          how to quantify and minimize the uncertainty with respect to training an
          ML algorithm in Section 2.4,