# adversarial-attacks

Here are 875 public repositories matching this topic...

This project evaluates the robustness of image classification models against adversarial attacks using two key metrics: Adversarial Distance and CLEVER. The study employs variants of the WideResNet model, including a standard and a corruption-trained robust model, trained on the CIFAR-10 dataset. Key insights reveal that the CLEVER Score serves as … (a sketch of computing both metrics follows this entry).

  • Updated Jun 1, 2024
  • Jupyter Notebook
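The entry above names two robustness metrics, Adversarial Distance and CLEVER. The sketch below shows one plausible way to compute both using the IBM Adversarial Robustness Toolbox (ART); the toy CNN, the random input, and all hyperparameter values are assumptions standing in for the repository's WideResNet/CIFAR-10 setup, not the project's actual code.

```python
# Minimal sketch, assuming ART and PyTorch are installed:
#   pip install adversarial-robustness-toolbox torch
# The tiny CNN and random input below are placeholders for the
# WideResNet/CIFAR-10 models described in the repository.
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod
from art.metrics import clever_u

# Toy CIFAR-10-shaped classifier (stand-in for WideResNet).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# One CIFAR-10-sized image (random here; a real run would use test data).
x = np.random.rand(1, 3, 32, 32).astype(np.float32)

# CLEVER: an attack-agnostic estimate of the minimal perturbation
# (untargeted, L2 norm) needed to flip the prediction.
clever_score = clever_u(
    classifier, x[0], nb_batches=10, batch_size=5, radius=0.3, norm=2
)

# "Adversarial distance": L2 norm of the perturbation found by an
# actual attack (FGSM here for brevity) on the same input.
x_adv = FastGradientMethod(estimator=classifier, eps=0.03).generate(x)
adv_distance = np.linalg.norm((x_adv - x).reshape(-1))

print(f"CLEVER (untargeted, L2): {clever_score:.4f}")
print(f"Adversarial distance (FGSM, L2): {adv_distance:.4f}")
```

Comparing the two is natural: CLEVER estimates robustness without committing to any attack, while the adversarial distance depends on how strong the chosen attack is, which is presumably why the project reports both.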

Code, model and data for our paper: K. Tsigos, E. Apostolidis, S. Baxevanakis, S. Papadopoulos, V. Mezaris, "Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection", Proc. ACM Int. Workshop on Multimedia AI against Disinformation (MAD’24) at the ACM Int. Conf. on Multimedia Retrieval (ICMR’24), Thailand, June 2024.

  • Updated May 29, 2024
  • Python
