Paper
Adversarial example generation using object detection
19 October 2022
Liang Gu, Haitao Liu, Xin Gong, Yi An, Ning Yu, Jing Duan, Jie Duan
Proceedings Volume 12294, 7th International Symposium on Advances in Electrical, Electronics, and Computer Engineering; 122941S (2022) https://doi.org/10.1117/12.2639693
Event: 7th International Symposium on Advances in Electrical, Electronics and Computer Engineering (ISAEECE 2022), 2022, Xishuangbanna, China
Abstract
Deep neural networks perform well in many areas, but research has shown that they are vulnerable to adversarial example attacks. Adversarial attacks are therefore an important means of evaluating a model's robustness before a deep neural network is deployed. However, the attack success rate of adversarial examples in the black-box setting still needs to be improved. Many algorithms exist for attacking neural networks, but most of them are slow, so fast generation of adversarial examples has gradually become a research focus in the field. To address this, an adversarial example generation algorithm based on object detection is proposed. First, starting from the black-box scenario, object detection is performed on the original input image during adversarial example generation and constraints are added; then, an improved differential evolution algorithm computes the degree of interference each individual in the population introduces into the image and repeatedly selects the individual with the best attack effect for the next iteration; finally, the adversarial example is generated from the best-performing individual, which eliminates overfitting in adversarial example generation and improves the transferability of the adversarial examples. Experiments on the CIFAR-10 dataset validate the effectiveness of the proposed algorithm: the object detection-based adversarial example generation algorithm achieves a higher black-box attack success rate than the original one-pixel attack.
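The baseline the abstract compares against, the one-pixel attack, searches for a single perturbed pixel with differential evolution. Below is a minimal, hypothetical sketch of that style of black-box search, optionally restricted to a detected image region in the spirit of the paper's constraint step. The `model_predict` callable, the `(x0, y0, x1, y1)` region format, and the 0-255 pixel range are assumptions for illustration, not the authors' implementation.

```python
# Illustrative one-pixel-style attack via differential evolution (not the paper's exact method).
import numpy as np
from scipy.optimize import differential_evolution

def perturb(image, candidate):
    """Apply one (x, y, r, g, b) candidate perturbation to a copy of the image."""
    x, y, r, g, b = candidate
    adv = image.copy()
    adv[int(y), int(x)] = np.clip([r, g, b], 0, 255)  # assumes 0-255 pixel values
    return adv

def attack(image, true_label, model_predict, region=None, maxiter=75, popsize=20):
    """Search for a single-pixel perturbation that lowers the true-class confidence.

    `region` optionally restricts the search to an (x0, y0, x1, y1) box, e.g. one
    obtained from an object detector, as the abstract's constraint step suggests.
    `model_predict` is an assumed black-box that returns class probabilities for
    a batch of images.
    """
    h, w, _ = image.shape
    x0, y0, x1, y1 = region if region is not None else (0, 0, w - 1, h - 1)
    bounds = [(x0, x1), (y0, y1), (0, 255), (0, 255), (0, 255)]

    def fitness(candidate):
        # Lower confidence on the true class means a stronger attack.
        adv = perturb(image, candidate)
        probs = model_predict(adv[None, ...])[0]
        return probs[true_label]

    result = differential_evolution(fitness, bounds, maxiter=maxiter,
                                    popsize=popsize, recombination=1.0,
                                    polish=False, seed=0)
    return perturb(image, result.x)
```

The fitness function only queries the model's output probabilities, so the search needs no gradients, which is what makes this family of attacks black-box.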
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Liang Gu, Haitao Liu, Xin Gong, Yi An, Ning Yu, Jing Duan, and Jie Duan "Adversarial example generation using object detection", Proc. SPIE 12294, 7th International Symposium on Advances in Electrical, Electronics, and Computer Engineering, 122941S (19 October 2022); https://doi.org/10.1117/12.2639693
KEYWORDS
Target detection, Detection and tracking algorithms, Neural networks, Algorithm development, Data modeling, Image processing, Evolutionary algorithms