gaudelbijay/diffusion-defender


imageAttackDetectionAndDenoising

A diffusion-based denoising approach to mitigate online adversarial image attacks and an FFT-based detector
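As background, the defender's denoiser is trained with a DDPM-style diffusion framework (see the Acknowledgement section). The following is a minimal numpy sketch of a single reverse-diffusion (denoising) step; the linear beta schedule, the zero stand-in noise predictor, and all parameter values here are illustrative assumptions, not the trained model:

```python
import numpy as np

def reverse_diffusion_step(x_t, t, eps_hat, betas, rng):
    """One DDPM-style reverse step: estimate x_{t-1} from x_t.

    x_t     : noisy image at step t (H x W array)
    eps_hat : predicted noise for x_t (same shape); the real defender
              uses a trained network, here the caller supplies it
    betas   : noise schedule (array of length T)
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean                      # final step is deterministic
    z = rng.standard_normal(x_t.shape)   # fresh Gaussian noise
    return mean + np.sqrt(betas[t]) * z

# toy usage: run the full reverse chain on a random "image"
# with a zero noise predictor (illustrative only)
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 100)
x = rng.standard_normal((8, 8))
for t in reversed(range(100)):
    x = reverse_diffusion_step(x, t, np.zeros_like(x), betas, rng)
```

In the actual pipeline, `eps_hat` would come from the trained denoiser (the model-100.pt checkpoint below) and `x_t` from the attacked camera images.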

How to use this code

This code has been tested on Ubuntu 18.04 LTS with ROS Melodic, Python 3.6.9 and 3.10.0, and GCC 8.4.0.

Open three terminals.

Create a catkin workspace and clone the repositories as follows:

```shell
# in terminal 1
mkdir -p ~/defender_ws/src && cd ~/defender_ws/src

git clone https://github.com/r-bahrami/iros_image_attack.git
git clone https://github.com/gaudelbijay/attack-defender.git
git clone https://github.com/ros-perception/vision_opencv -b melodic

cd ..
```

The attack-defender model requires Python 3.10.0, so set up a virtual environment and install the dependencies listed in requirements.txt.

```shell
# in terminal 2
python3.10 -m venv diffusion_env
source diffusion_env/bin/activate
cd <path to defender>/src/attack-defender
pip install -r requirements.txt

cd src/diffusion_model/ && mkdir results
```

Download the trained parameters of the diffusion-based denoiser, model-100.pt, from this link and copy the file into the results folder.

Build and train the RL-based attacker model

```shell
# in terminal 1
# cd ~/defender_ws/
catkin_make -DPYTHON_EXECUTABLE=/usr/bin/python3
source devel/setup.bash
```

Set up AirSim and the simulation environment settings (Environment 2) as described in the iros_image_attack package.

To integrate the attack-defender, uncomment line 71 and comment out line 70 in model_yolo.py. This forces the simulation to use the denoised images, so the RL-based attacker is trained with the attack-defender in the loop.

Run the ./Blocks.sh environment in terminal 3.

Run the attack-defender:

```shell
# in terminal 2
# source diffusion_env/bin/activate
cd ~/defender_ws/src/attack-defender/src/diffusion_model

python node_denoising.py
```

Then train the attacker:

```shell
# in terminal 1
roslaunch iros_image_attack train.launch
```

After 10 episodes, the code will stop and the trained model will be saved. Afterward, different experiments can be run as described below.

Runtime

```shell
# in terminal 2
# source diffusion_env/bin/activate
# cd ~/defender_ws/src/attack-defender/src/diffusion_model

python node_denoising.py
```

```shell
# in terminal 1
roslaunch iros_image_attack run.launch
```

TO-DO

  • Compare the code with this commit to figure out the encoding bug.

Integration with attack models.

To integrate the attack-defender model with the RL-based attacker in iros_image_attack, we only added one line to the node_yolo.py module to subscribe to the denoised images. We also enforced the image encoding encoding="bgr8" for the published attacked images in the node_image_attacker.py module.
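Enforcing the encoding matters because rgb8 and bgr8 buffers differ only in channel order: the same bytes interpreted under the wrong encoding have their red and blue channels swapped. A minimal numpy illustration of that swap (hypothetical, independent of the ROS code):

```python
import numpy as np

# a 2x2 "rgb8" image: every pixel is pure red
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255                  # R channel

# reversing the channel axis converts rgb8 <-> bgr8;
# after the swap, red lives in the last channel
bgr = rgb[..., ::-1]

assert (bgr[..., 2] == 255).all()  # red moved to index 2
assert (bgr[..., 0] == 0).all()    # blue channel is empty
```

A consumer that assumes the wrong order would see red and blue exchanged, which is one plausible source of the encoding bug mentioned in the TO-DO above.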

For the integration to take effect, uncomment line 71 and comment out line 70 in model_yolo.py; the simulation then runs on the denoised images.

  • Integrate the FFT code for online attack detection.
  • Update the FFT code for online attack detection to work in the 2D Height × Width space, where the detection threshold can be better parameterized.
  • Add the training details.
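Since the FFT detector is not yet integrated, the following is a minimal numpy sketch of what a 2D Height × Width detector could look like. The high-frequency energy-ratio statistic, the `cutoff` band, and the threshold `tau` are illustrative assumptions, not the repo's implementation:

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.25):
    """Fraction of spectral energy outside a central low-frequency block.

    img    : 2D grayscale image (H x W)
    cutoff : half-width of the kept low-frequency band, as a
             fraction of each spatial dimension
    """
    spec = np.fft.fftshift(np.fft.fft2(img))   # center DC component
    energy = np.abs(spec) ** 2
    h, w = img.shape
    ch, cw = h // 2, w // 2
    dh, dw = int(h * cutoff), int(w * cutoff)
    low = energy[ch - dh:ch + dh, cw - dw:cw + dw].sum()
    return 1.0 - low / energy.sum()

def is_attacked(img, tau=0.05):
    """Flag an image whose high-frequency energy ratio exceeds tau."""
    return high_freq_ratio(img) > tau

# natural-looking content concentrates energy at low frequencies;
# broadband adversarial-style noise raises the high-frequency ratio
xx = np.tile(np.arange(64.0), (64, 1))
smooth = np.sin(2 * np.pi * xx / 64)           # one low-frequency wave
rng = np.random.default_rng(0)
noisy = smooth + 0.5 * rng.standard_normal(smooth.shape)
```

Here `tau` plays the role of the "better parameterized threshold" from the TO-DO: in practice it would be calibrated on clean and attacked images rather than fixed by hand.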

Acknowledgement

We used the denoising-diffusion-pytorch repo to implement and train the diffusion model of the attack-defender.

We used the iros_image_attack repo for the RL-based attacker.
