Fawzi, A., Moosavi-Dezfooli, S. -M., Frossard, P., & Soatto, S. (2017). Classification regions of
deep neural networks. arXiv preprint arXiv:1705.09552.
Foster, L., Waagen, A., Aijaz, N., Hurley, M., Luis, A., Rinsky, J., et al. (2009). Stable and
efficient Gaussian process calculations. Journal of Machine Learning Research, 10(Apr),
857–882.
Girolami, M., & Calderhead, B. (2011). Riemann manifold Langevin and Hamiltonian
Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 73(2), 123–214.
Goldfarb, D. (1970). A family of variable metric updates derived by variational means.
Mathematics of Computation, 24(109), 23–26.
Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial
examples. arXiv preprint arXiv:1412.6572.
Graves, A., Mohamed, A. -R., & Hinton, G. (2013). Speech recognition with deep recurrent
neural networks. In 2013 IEEE international conference on acoustics, speech and signal
processing (ICASSP) (pp. 6645–6649).
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning. New York:
Springer-Verlag.
Haykin, S. (1998). Neural networks: A comprehensive foundation (2nd ed.). NJ: Prentice Hall.
Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the knowledge in a neural network. arXiv
preprint arXiv:1503.02531.
Hu, C., Pan, W., & Kwok, J. T. (2009). Accelerated gradient methods for stochastic
optimization and online learning. In Advances in neural information processing systems
(pp. 781–789).
Jaderberg, M., Simonyan, K., Vedaldi, A., & Zisserman, A. (2016). Reading text in the wild
with convolutional neural networks. International Journal of Computer Vision, 116(1),
1–20.
Johnson, R., & Zhang, T. (2013). Accelerating stochastic gradient descent using predictive
variance reduction. In Advances in neural information processing systems (pp. 315–323).
Jolliffe, I. T. (1986). Principal component analysis. New York: Springer-Verlag.
Kannan, H., Kurakin, A., & Goodfellow, I. (2018). Adversarial logit pairing. arXiv preprint
arXiv:1803.06373.
Kocijan, J. (2016). Modelling and control of dynamic systems using Gaussian process models. New
York: Springer.
Koppel, A., Fink, J., Warnell, G., Stump, E., & Ribeiro, A. (2016). Online learning for
characterizing unknown environments in ground robotic vehicle models. In 2016 IEEE/RSJ
international conference on intelligent robots and systems (IROS) (pp. 626–633).
Kos, J., Fischer, I., & Song, D. (2017). Adversarial examples for generative models. arXiv preprint
arXiv:1702.06832.
Kott, A., Swami, A., & West, B. J. (2016). The internet of battle things. Computer, 49(12),
70–75. https://doi.org/10.1109/MC.2016.355.
Krige, D. G. (1951). A statistical approach to some basic mine valuation problems on the
Witwatersrand. Journal of the Southern African Institute of Mining and Metallurgy, 52(6),
119–139.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep
convolutional neural networks. In Advances in neural information processing systems
(pp. 1097–1105).
Kurakin, A., Goodfellow, I., & Bengio, S. (2016). Adversarial examples in the physical world.
arXiv preprint arXiv:1607.02533.
Lakshminarayanan, B., Pritzel, A., & Blundell, C. (2017). Simple and scalable predictive
uncertainty estimation using deep ensembles. In Advances in neural information processing
systems (pp. 1–12; supplemental material, p. 13).
Lee, J. D., Simchowitz, M., Jordan, M. I., & Recht, B. (2016). Gradient descent only
converges to minimizers. In Conference on learning theory (pp. 1246–1257).