Adversarial examples generation for ConvNeXt

Open In Colab

This project is a PyTorch implementation of @stanislavfort's project.

The notebook generates adversarial images to "fool" the ConvNeXt model's image classifier. ConvNeXt was released by Meta AI in early 2022.

FGSM (Fast Gradient Sign Method) is a simple yet effective algorithm for attacking models in a white-box fashion with the goal of misclassification. Noise is added to the input image, not randomly, but in the direction of the sign of the gradient of the loss function with respect to the input: x_adv = x + ε · sign(∇ₓ J(θ, x, y)).
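A minimal PyTorch sketch of the attack (not the notebook's exact code), assuming torchvision's pretrained `convnext_tiny` (weights API of torchvision >= 0.13); the `fgsm_attack` helper and the `eps` value are illustrative:

```python
import torch
import torch.nn.functional as F
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

# Load a pretrained ConvNeXt-Tiny and its matching preprocessing transform
weights = ConvNeXt_Tiny_Weights.DEFAULT
model = convnext_tiny(weights=weights).eval()
preprocess = weights.transforms()

def fgsm_attack(model, image, label, eps=2 / 255):
    """Single-step FGSM: x_adv = x + eps * sign(grad_x loss)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by eps in the direction that increases the loss,
    # i.e. along the sign of the gradient w.r.t. the input
    return (image + eps * image.grad.sign()).detach()

# Usage (illustrative): x is a preprocessed (1, 3, H, W) batch, y its label
# x = preprocess(pil_image).unsqueeze(0)
# y = torch.tensor([label_idx])
# x_adv = fgsm_attack(model, x, y)
```

Note that `eps` here applies in the normalized input space; the notebook's exact epsilon and any clipping may differ.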

Since this notebook is just the implementation, you may refer to these resources to learn more about FGSM:

  1. TensorFlow tutorial: https://www.tensorflow.org/tutorials/generative/adversarial_fgsm
  2. PyTorch tutorial: https://pytorch.org/tutorials/beginner/fgsm_tutorial.html
  3. Goodfellow et al., "Explaining and Harnessing Adversarial Examples": https://arxiv.org/abs/1412.6572

The following figure summarizes the goal of this notebook: