Abstract: To address the insufficient transferability of adversarial examples and the limited black-box attack capability against deep learning models, this study designs an iterative fast gradient sign method based on the NadaMax optimizer (NM-FGSM). The method integrates the advantages of Nesterov Accelerated Gradient and the Adamax optimizer, improving the accuracy of gradient updates through adaptive learning rates and lookahead momentum vectors. In addition, dynamic regularization is introduced to enhance the convexity of the optimization problem, improving the algorithm's stability and specificity. Experimental results demonstrate that NM-FGSM outperforms existing methods across various attack strategies; in particular, against advanced defenses the attack success rate increases by 4%–8%. The dynamically regularized loss function enhances the cross-model transferability of adversarial examples, further improving black-box attack effectiveness. Finally, the paper outlines future directions for the NM-FGSM algorithm and corresponding defense measures, offering new insights into the security of deep learning models.
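To make the update described above concrete, the following is a minimal sketch of an NM-FGSM-style attack step, assuming a PyTorch image classifier. The function name `nm_fgsm`, the hyperparameter values, the per-sample gradient normalization, and the omission of the paper's dynamic regularization term are all illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch of an NM-FGSM-style update, assuming a PyTorch image
# classifier with inputs in [0, 1]; hyperparameters are placeholders,
# not the paper's reported settings.
import torch
import torch.nn.functional as F

def nm_fgsm(model, x, y, epsilon=8 / 255, steps=10, mu=0.9, beta2=0.999, eps_hat=1e-8):
    """Craft adversarial examples with a Nesterov-lookahead, Adamax-scaled
    iterative fast gradient sign step (the paper's dynamic regularization
    term is omitted here for brevity)."""
    alpha = epsilon / steps                  # per-iteration step budget
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)                  # first-moment (momentum) estimate
    u = torch.zeros_like(x)                  # Adamax infinity-norm accumulator
    for t in range(1, steps + 1):
        # Nesterov lookahead: evaluate the gradient at the anticipated point.
        x_nes = (x_adv + alpha * mu * m).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_nes), y)
        grad = torch.autograd.grad(loss, x_nes)[0]
        # L1-normalize per sample, as in momentum-based FGSM variants.
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        m = mu * m + (1 - mu) * grad                 # momentum accumulation
        u = torch.maximum(beta2 * u, grad.abs())     # Adamax max-norm update
        m_hat = m / (1 - mu ** t)                    # bias-corrected momentum
        x_adv = x_adv + alpha * (m_hat / (u + eps_hat)).sign()
        # Project into the epsilon-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```

In this sketch, the ratio m_hat / (u + eps_hat) plays the role of Adamax's per-coordinate adaptive learning rate, while the lookahead point x_nes supplies the Nesterov correction described in the abstract; the final sign() keeps each iteration within the per-step budget alpha, as in standard FGSM-family attacks.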