Defense-GAN Physical Transferability

Abstract

This project discusses the transferability of state-of-the-art defense techniques for adversarial examples against deep learning systems into the physical domain. The paper explores adversarial attacks using the Fast Gradient Sign Method (FGSM), Carlini & Wagner (CW), and DeepFool to generate adversarial images that are presented to the classifier both digitally and as physically transformed images. Furthermore, we present novel results demonstrating the effectiveness of the state-of-the-art Defense-GAN technique at reconstructing images that have undergone the physical transformation, with a significant portion of the adversarial noise filtered out. We also show that, for finer adversarial attacks, the physical transformation itself causes a high degree of adversarial destruction, calling into question the need for additional defenses.

Adversarial Examples

Attacks used to generate Physical Adversarial Examples
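Of the three attacks above, FGSM is the simplest to illustrate: it perturbs each input pixel by a fixed step in the direction of the sign of the loss gradient. The sketch below is a minimal, self-contained NumPy illustration on a hypothetical logistic classifier (the model, weights, and epsilon value are illustrative assumptions, not this project's actual classifier or settings):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """Fast Gradient Sign Method: step in the sign of the loss gradient
    and clip back to the valid pixel range."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy example: logistic classifier with an analytic input gradient.
# (A real attack would obtain grad_x via autodiff on the target network.)
rng = np.random.default_rng(0)
w = rng.normal(size=64)        # weights of a hypothetical linear model
x = rng.uniform(size=64)       # a "flattened image" with pixels in [0, 1]
y = 1.0                        # true label

p = 1.0 / (1.0 + np.exp(-(w @ x)))  # predicted probability
grad_x = (p - y) * w                # d(cross-entropy)/dx for this model

x_adv = fgsm_perturb(x, grad_x, epsilon=0.1)
```

Because the perturbation is `epsilon * sign(grad)` followed by clipping, the adversarial image differs from the original by at most `epsilon` per pixel.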

Experimental Setup

  • NVidia Tesla P4 - 8GB DDR5 GPU Memory
  • 13 GB of RAM
  • 2 vCPUs
  • Logitech c922x Webcam

Classifier Architecture

Classifier A Architecture

Defense-GAN

Defense-GAN Setup
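At inference time, Defense-GAN cleans an input by searching the generator's latent space for the point whose generated image is closest to the input, then classifying that reconstruction; adversarial noise that lies off the generator's manifold is filtered out. The sketch below shows this projection step with gradient descent and random restarts, using a hypothetical linear "generator" so the gradient is analytic and the example stays self-contained (a real Defense-GAN would use a trained GAN generator and autodiff, and the restart/step/learning-rate values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear "generator" standing in for a trained GAN generator.
W = rng.normal(size=(32, 8)) / np.sqrt(8)  # image_dim x latent_dim
G = lambda z: W @ z

def defense_gan_reconstruct(x, n_restarts=5, n_steps=200, lr=0.05, seed=0):
    """Project x onto the range of G by minimising ||G(z) - x||^2 over z,
    keeping the best of several random restarts (the Defense-GAN procedure)."""
    local_rng = np.random.default_rng(seed)
    best_z, best_err = None, np.inf
    for _ in range(n_restarts):
        z = local_rng.normal(size=W.shape[1])
        for _ in range(n_steps):
            r = G(z) - x
            grad = 2.0 * W.T @ r  # analytic gradient; autodiff for a real GAN
            z -= lr * grad
        err = float(np.sum((G(z) - x) ** 2))
        if err < best_err:
            best_z, best_err = z, err
    return G(best_z)

x_clean = G(rng.normal(size=8))                        # image on the manifold
x_adv = x_clean + 0.3 * np.sign(rng.normal(size=32))   # adversarial noise
x_rec = defense_gan_reconstruct(x_adv)                 # noise largely removed
```

Since the reconstruction is constrained to the generator's range, the off-manifold component of the adversarial noise is discarded, so `x_rec` ends up closer to the clean image than the attacked input was.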
