Paper
Enhancing the transferability of adversarial black-box attacks (18 February 2022)
Proceedings Volume 12162, International Conference on High Performance Computing and Communication (HPCCE 2021); 1216202 (2022) https://doi.org/10.1117/12.2627924
Event: 2021 International Conference on High Performance Computing and Communication, 2021, Guangzhou, China
Abstract
Deep neural networks are especially vulnerable to adversarial examples, which can mislead classifiers through imperceptible perturbations. While previous research can effectively generate adversarial examples in the white-box setting, producing threatening adversarial examples in the black-box setting, where attackers can only obtain the models' predictions for given inputs, remains a challenge. A feasible solution is to harness the transferability of adversarial examples: this property allows an adversarial example to successfully attack multiple models simultaneously. This paper therefore explores ways to enhance the transferability of adversarial examples and proposes a Nadam-based iterative algorithm (NAI-FGM). NAI-FGM achieves better convergence and effectively corrects the update deviation, thereby boosting the transferability of adversarial examples. To validate the effectiveness and transferability of the adversarial examples generated by NAI-FGM, this study attacks various single models and ensemble models on the public Cifar-10 and Cifar-100 datasets. Experimental results show the superiority of NAI-FGM, which on average achieves higher transferability against black-box models than state-of-the-art methods.
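The abstract does not give the exact update rule, but the idea of driving an iterative fast-gradient attack with a Nadam-style (Nesterov-accelerated adaptive moment) optimizer can be sketched as follows. This is a minimal illustration under assumed standard Nadam hyperparameters (beta1, beta2) and an L-infinity budget split evenly across steps; the function names and the `grad_fn` interface are hypothetical, not from the paper:

```python
import numpy as np

def nai_fgm(x0, grad_fn, eps=0.03, steps=10, beta1=0.9, beta2=0.999, delta=1e-8):
    """Sketch of a Nadam-driven iterative fast-gradient attack.

    x0      : clean input (numpy array)
    grad_fn : callable returning the loss gradient w.r.t. the input
    eps     : L-infinity perturbation budget
    """
    alpha = eps / steps            # per-step size so the budget spans all steps
    x = x0.copy()
    m = np.zeros_like(x0)          # first-moment (momentum) accumulator
    v = np.zeros_like(x0)          # second-moment accumulator
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        # bias-corrected moments, with Nadam's Nesterov look-ahead on m
        m_hat = (beta1 * m / (1 - beta1 ** (t + 1))
                 + (1 - beta1) * g / (1 - beta1 ** t))
        v_hat = v / (1 - beta2 ** t)
        # ascend the loss along the sign of the adaptive update direction
        x = x + alpha * np.sign(m_hat / (np.sqrt(v_hat) + delta))
        # project back into the L-infinity ball around the clean input
        x = np.clip(x, x0 - eps, x0 + eps)
    return x
```

Compared with plain momentum iterative attacks, the second-moment normalization and Nesterov look-ahead are what a Nadam-based variant would contribute; the actual NAI-FGM formulation may differ in details.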
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yuxiao Gao "Enhancing the transferability of adversarial black-box attacks", Proc. SPIE 12162, International Conference on High Performance Computing and Communication (HPCCE 2021), 1216202 (18 February 2022); https://doi.org/10.1117/12.2627924
KEYWORDS
Neural networks; Data modeling; Defense and security; Artificial intelligence; Network security; Performance modeling; Statistical modeling