A Method of Enhancing Adversarial Example Transferability Based on NadaMax Update and Dynamic Regularization
CLC Number: TP391.4

    Abstract:

    To address the insufficient transferability of adversarial examples and the limited black-box attack capability against deep learning models, this study proposes an iterative fast gradient method based on the NadaMax optimizer (NM-FGSM). The method integrates the advantages of Nesterov Accelerated Gradient and the Adamax optimizer, improving the accuracy of gradient updates through adaptive learning rates and lookahead momentum vectors. In addition, dynamic regularization is introduced to enhance the convexity of the optimization problem, improving the stability and targeting capability of the algorithm. Experimental results demonstrate that NM-FGSM outperforms existing methods under various attack strategies; in particular, the attack success rate increases by 4%~8% against advanced defense scenarios. The dynamically regularized loss function enhances the cross-model transferability of adversarial examples, thereby further improving black-box attack effectiveness. Finally, directions for future work on the NM-FGSM algorithm and corresponding defense measures are outlined, offering new insights for research on the security of deep learning models.
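    The update rule described in the abstract, which is a Nesterov lookahead combined with Adamax-style infinity-norm scaling inside an iterative FGSM loop, plus a dynamic regularization term in the loss, can be sketched as follows. This is a minimal PyTorch sketch under assumed conventions for transfer-based attacks; the function name nm_fgsm_attack, the hyperparameters, and the exact form of the dynamic regularizer are illustrative assumptions, not the paper's published formulation.

```python
# Minimal sketch of an NM-FGSM-style attack (assumption: the paper's exact update
# is not reproduced here). It combines an iterative FGSM loop with a Nesterov
# lookahead, an Adamax-like infinity-norm accumulator for adaptive per-coordinate
# scaling, and a dynamic (iteration-dependent) regularizer in the loss.
import torch
import torch.nn.functional as F

def nm_fgsm_attack(model, x, y, eps=8 / 255, steps=10,
                   mu=0.9, beta2=0.999, reg_weight=0.01):
    """x: NCHW image batch in [0, 1]; y: ground-truth labels (untargeted attack)."""
    alpha = eps / steps                  # per-iteration step budget
    m = torch.zeros_like(x)              # momentum (first-moment) estimate
    u = torch.zeros_like(x)              # Adamax infinity-norm accumulator
    x_adv = x.clone().detach()

    for t in range(1, steps + 1):
        # Nesterov lookahead: take the gradient at the anticipated next point.
        x_nes = (x_adv + alpha * mu * m).detach().requires_grad_(True)
        logits = model(x_nes)
        # Assumed form of the dynamic regularizer: a quadratic penalty on the
        # perturbation whose weight decays with the iteration index, keeping the
        # (maximized) objective well-behaved in early iterations.
        reg = (reg_weight / t) * (x_nes - x).pow(2).sum()
        loss = F.cross_entropy(logits, y) - reg
        grad = torch.autograd.grad(loss, x_nes)[0]

        # NadaMax-style accumulation: momentum over L1-normalized gradients plus
        # an element-wise max of past gradient magnitudes (Adamax second moment).
        g = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        m = mu * m + g
        u = torch.maximum(beta2 * u, grad.abs())
        step = m / u.clamp_min(1e-12)
        # Rescale so the largest per-pixel move equals alpha, then project back
        # into the L-infinity ball of radius eps and the valid image range.
        step = step / step.abs().amax(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        x_adv = x_adv + alpha * step
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1).detach()
    return x_adv
```

    In the transfer setting the abstract targets, x_adv would be crafted on a white-box surrogate model and then evaluated against held-out black-box and defended classifiers, which is the scenario in which the reported 4%~8% success-rate gains apply.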

History
  • Online: June 04, 2025