An ASR (Automatic Speech Recognition) adversarial attack repository.
Vanilla training and adversarial training in PyTorch.
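For orientation, a minimal sketch of what the adversarial-training side of such a repository typically looks like in PyTorch. The names (model, train_loader, optimizer), the single-step FGSM inner attack, and pixel values in [0, 1] are assumptions for illustration, not this repository's actual code; stronger setups use a multi-step PGD inner loop.

```python
import torch
import torch.nn.functional as F

def adversarial_train_epoch(model, train_loader, optimizer, epsilon=0.03, device="cpu"):
    """One epoch of FGSM adversarial training (illustrative names and settings)."""
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)

        # Craft adversarial examples on the fly with single-step FGSM
        # (assumes pixel values in [0, 1]).
        images.requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        grad = torch.autograd.grad(loss, images)[0]
        adv_images = (images + epsilon * grad.sign()).clamp(0, 1).detach()

        # Vanilla training would use `images` here; adversarial training
        # updates the model on the perturbed batch instead.
        optimizer.zero_grad()
        F.cross_entropy(model(adv_images), labels).backward()
        optimizer.step()
```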
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
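A minimal PyTorch sketch of the two attacks named here, FGSM and PGD, under the common assumptions of inputs in [0, 1], an L-infinity budget, and a cross-entropy loss; function names and default hyperparameters are illustrative rather than taken from any of these repositories.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Single-step FGSM: move x by epsilon in the direction of the loss gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + epsilon * grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.007, steps=10):
    """Multi-step PGD: repeated signed-gradient steps, projected back into the epsilon-ball."""
    x_orig = x.clone().detach()
    # Random start inside the epsilon-ball, clipped to the valid pixel range.
    x_adv = (x_orig + torch.empty_like(x_orig).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # Project back onto the L-infinity ball around the originals and clip to [0, 1].
        x_adv = torch.min(torch.max(x_adv, x_orig - epsilon), x_orig + epsilon).clamp(0, 1)
    return x_adv.detach()
```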
This work focuses on enhancing the robustness of target classifier models against adversarial attacks. To achieve this, a convolutional autoencoder-based approach is employed that counters adversarial perturbations introduced to the input images.
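A minimal sketch of the general idea: a small convolutional autoencoder, trained separately to reconstruct clean images, is placed in front of the classifier at inference time so that reconstruction strips out small perturbations. The architecture and function names below are assumptions for illustration, not the repository's actual model.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder used to reconstruct ("purify") incoming images."""
    def __init__(self, channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, channels, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def defended_predict(autoencoder, classifier, x):
    """Reconstruct inputs with the autoencoder before classifying, so small
    adversarial perturbations are (ideally) removed by the reconstruction."""
    with torch.no_grad():
        return classifier(autoencoder(x))
```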
Adversarial network attacks (PGD, pixel, FGSM) applied as noise to the MNIST image dataset, using Python (PyTorch).
A classical or convolutional neural network model with adversarial defense protection
Implementation of the PGD attack on a model trained on the CIFAR-10 dataset in TensorFlow. The FID between the original and generated (adversarial) images is also computed.
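A minimal TensorFlow sketch of an L-infinity PGD attack of this kind, assuming a Keras classifier that outputs logits and images scaled to [0, 1]; hyperparameters are illustrative. The FID computation, which requires a pretrained Inception network, is not shown.

```python
import tensorflow as tf

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """L-infinity PGD against a TensorFlow classifier; inputs assumed in [0, 1]."""
    x_orig = tf.identity(x)
    # Random start inside the epsilon-ball.
    x_adv = x_orig + tf.random.uniform(tf.shape(x_orig), -epsilon, epsilon)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = loss_fn(y, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        x_adv = x_adv + alpha * tf.sign(grad)
        # Project back into the epsilon-ball and the valid pixel range.
        x_adv = tf.clip_by_value(x_adv, x_orig - epsilon, x_orig + epsilon)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
```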
Implementations for several white-box and black-box attacks.
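White-box attacks such as FGSM and PGD use the model's gradients (as sketched above); black-box attacks only query its outputs. Below is a toy score-based black-box example, assuming a single input example and access to the model's logits; it is a generic illustration, not any specific repository's method.

```python
import torch
import torch.nn.functional as F

def random_search_attack(model, x, y, epsilon=0.05, queries=500):
    """Toy score-based black-box attack for a single example (shape [1, C, H, W]):
    propose random L-infinity perturbations and keep any that raise the loss,
    using only the model's outputs (no gradients)."""
    best = x.clone()
    with torch.no_grad():
        best_loss = F.cross_entropy(model(best), y)
        for _ in range(queries):
            candidate = (x + epsilon * torch.empty_like(x).uniform_(-1, 1)).clamp(0, 1)
            loss = F.cross_entropy(model(candidate), y)
            if loss > best_loss:
                best, best_loss = candidate, loss
    return best
```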
"Neural Computing and Applications" Published Paper (2023)
Adversarial defense via retrieval-based methods.
A classical-quantum or hybrid neural network with adversarial defense protection
Developed robust image classification models to mitigate the effects of adversarial attacks.