Presentation + Paper
Efficacy of defending deep neural networks against adversarial attacks with randomization
21 April 2020
Yan Zhou, Murat Kantarcioglu, and Bowei Xi
Abstract
Adversarial machine learning studies the vulnerabilities of machine learning techniques to adversarial attacks and potential defenses against such attacks. Both the intrinsic vulnerabilities and the incongruous, often suboptimal defenses are rooted in the standard assumption on which machine learning methods are built: that data are independent and identically distributed (i.i.d.) samples, so the training data are representative of the general population and a model that fits the training data accurately will perform well on test data drawn from the rest of that population. Violations of the i.i.d. assumption characterize the challenges of detecting and defending against adversarial attacks. For an informed adversary, the most effective attack strategy is to transform malicious data so that it appears indistinguishable from legitimate data to the target model. Current developments in adversarial machine learning suggest that the adversary can easily gain the upper hand in this arms race: the adversary only needs a local breakthrough against a stationary target, while the target model must extend its predictive power to the entire population, including the corrupted data. This fundamental cause of stagnation in effective defense suggests building a moving target defense to give a machine learning model greater robustness. We investigate the feasibility and effectiveness of employing randomization to create a moving target defense for deep neural network learning models. Randomness is introduced by randomizing the input and by adding small random noise to the learned parameters. We perform an extensive empirical study covering different attack strategies and defense/detection techniques against adversarial attacks.
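The two randomization mechanisms named in the abstract lend themselves to a short illustration. Below is a minimal PyTorch sketch, assuming an image classifier, of what one randomized inference step might look like: the input is randomized (here via random zero-padding and cropping) and small Gaussian noise is added to the learned parameters before each prediction, so the attacker never faces the same fixed model twice. The function names, the padding/cropping scheme, and the noise scale sigma are illustrative assumptions, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

def randomize_input(x, pad=4):
    """Randomly shift the input by zero-padding and cropping back to the
    original size, so the exact pixels the model sees are unpredictable."""
    n, c, h, w = x.shape
    padded = nn.functional.pad(x, (pad, pad, pad, pad))
    top = torch.randint(0, 2 * pad + 1, (1,)).item()
    left = torch.randint(0, 2 * pad + 1, (1,)).item()
    return padded[:, :, top:top + h, left:left + w]

def add_parameter_noise(model, sigma=0.01):
    """Add small i.i.d. Gaussian noise to every learned parameter in place,
    turning the otherwise stationary model into a moving target."""
    with torch.no_grad():
        for p in model.parameters():
            p.add_(sigma * torch.randn_like(p))

@torch.no_grad()
def predict_moving_target(model, x, sigma=0.01, pad=4):
    """One randomized forward pass: perturb the weights on a fresh copy
    of the model, randomize the input, then classify."""
    noisy_model = copy.deepcopy(model)  # keep the original weights intact
    add_parameter_noise(noisy_model, sigma)
    noisy_model.eval()
    logits = noisy_model(randomize_input(x, pad))
    return logits.argmax(dim=1)
```

Because both the input transformation and the weight perturbation are redrawn on every call, gradients an adversary estimates against one realization of the model need not transfer to the next, which is the moving-target intuition the abstract describes.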
Conference Presentation
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yan Zhou, Murat Kantarcioglu, and Bowei Xi "Efficacy of defending deep neural networks against adversarial attacks with randomization", Proc. SPIE 11413, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130Q (21 April 2020); https://doi.org/10.1117/12.2558747
KEYWORDS: Data modeling, Defense and security, Statistical modeling, Machine learning, Neural networks, Convolution, Statistical analysis