Lian, X., Huang, Y., Li, Y., & Liu, J. (2015). Asynchronous parallel stochastic gradient for
nonconvex optimization. In Advances in neural information processing systems
(pp. 2737–2745).
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., et al. (2015). Continuous
control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
Ma, Y.-A., Chen, T., & Fox, E. (2015). A complete recipe for stochastic gradient MCMC.
In Advances in neural information processing systems (pp. 2917–2925).
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning
models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
Mairal, J., Bach, F., & Ponce, J. (2012). Task-driven dictionary learning. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 34(4), 791–804.
Mairal, J., Bach, F., Ponce, J., Sapiro, G., & Zisserman, A. (2008). Supervised dictionary
learning. In Advances in neural information processing systems 21 (pp. 1033–1040).
Mairal, J., Elad, M., & Sapiro, G. (2008). Sparse representation for color image restoration.
IEEE Transactions on Image Processing, 17(1), 53–69.
Mallat, S. (2008). A wavelet tour of signal processing: The sparse way (3rd ed.). London:
Academic Press.
Mandt, S., Hoffman, M. D., & Blei, D. M. (2017). Stochastic gradient descent as approximate
Bayesian inference. Journal of Machine Learning Research, 18(1), 4873–4907.
McIntire, M., Ratner, D., & Ermon, S. (2016). Sparse Gaussian processes for Bayesian opti-
mization. In Proceedings of the thirty-second conference on uncertainty in artificial intelligence
(pp. 517–526).
Mokhtari, A., Gürbüzbalaban, M., & Ribeiro, A. (2016). Surpassing gradient descent provably:
A cyclic incremental method with linear convergence rate. arXiv preprint arXiv:1611.00347.
Mokhtari, A., Koppel, A., Scutari, G., & Ribeiro, A. (2017). Large-scale nonconvex stochas-
tic optimization by doubly stochastic successive convex approximation. In 2017 IEEE
international conference on acoustics, speech and signal processing (ICASSP) (pp. 4701–4705).
Mokhtari, A., & Ribeiro, A. (2015). Global convergence of online limited memory BFGS.
Journal of Machine Learning Research, 16, 3151–3181.
Moosavi-Dezfooli, S.-M., Fawzi, A., & Frossard, P. (2016). DeepFool: A simple and accurate
method to fool deep neural networks. In Proceedings of the IEEE conference on computer
vision and pattern recognition (pp. 2574–2582).
Murphy, K. (2012). Machine learning: A probabilistic perspective. Cambridge, MA: MIT Press.
Nelder, J. A., & Baker, R. J. (1972). Generalized linear models. Encyclopedia of Statistical
Sciences.
Nemirovski, A., Juditsky, A., Lan, G., & Shapiro, A. (2009). Robust stochastic approxima-
tion approach to stochastic programming. SIAM Journal on Optimization, 19(4),
1574–1609.
Nesterov, Y. (2004). Introductory lectures on convex optimization: A basic course. New York: Springer US.
Neyshabur, B., Tomioka, R., Salakhutdinov, R., & Srebro, N. (2017). Geometry of optimi-
zation and implicit regularization in deep learning. arXiv preprint arXiv:1705.03071.
Papernot, N., Carlini, N., Goodfellow, I., Feinman, R., Faghri, F., Matyasko, A., et al.
(2016). cleverhans v2.0.0: An adversarial machine learning library. arXiv preprint
arXiv:1610.00768.
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., & Swami, A. (2017).
Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM
on Asia conference on computer and communications security (pp. 506–519).
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016). The
limitations of deep learning in adversarial settings. In 2016 IEEE European symposium on
security and privacy (EuroS&P) (pp. 372–387).