This project develops and analyzes Poison Attack and Evasion Attack strategies, which help in understanding and improving the robustness of neural networks against adversarial threats.
Repository file structure:
|-- Basic_pipline.py
|-- Poison_Attack.py
|-- Envasion_Attack.py
|-- model.py
|-- utils.py
|-- README.md
Your image dataset should be organized as follows:
|-- Image_directory
    |-- label1
        |-- images ...
    |-- label2
        |-- images ...
    |-- label3
        |-- images ...
    ...
    |-- labeln
        |-- images ...
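As a reference, here is a minimal sketch of how a dataset in this layout could be loaded, assuming a PyTorch/torchvision-based pipeline; the actual loading code in Basic_pipline.py or utils.py may differ, and the image size below is only an illustrative choice.

```python
# Minimal sketch: loading a label-per-subdirectory image dataset.
# Assumes PyTorch/torchvision; the real pipeline may use different settings.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # hypothetical input size
    transforms.ToTensor(),
])

# ImageFolder maps each label subdirectory to a class index automatically.
dataset = datasets.ImageFolder("Image_directory", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

print(dataset.classes)  # e.g. ['label1', 'label2', ..., 'labeln']
```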
There are three pipelines you can use for training and testing:
- Basic_pipline.py: Baseline training and evaluation pipeline without any adversarial attack.
- Poison_Attack.py: Builds on Basic_pipline.py and adds a Poison Attack.
- Envasion_Attack.py: Builds on Basic_pipline.py and adds an Evasion Attack.
To run one of them, type in a terminal:
# select the pipeline you want to run, for example:
$ python Basic_pipline.py
$ python Poison_Attack.py
$ python Envasion_Attack.py
Evasion attacks occur in the inference phase of a machine learning model. The attacker adds small perturbations to the input that are hard for humans to detect and feeds the modified input to the AI system, causing it to misclassify. This attack does not affect the training phase. The attacker's goal is to make the model output incorrect results while the input still appears normal to human observers. Evasion attacks are especially common in image processing.
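Below is a minimal sketch of one common evasion technique, the Fast Gradient Sign Method (FGSM). It is illustrative only and is not necessarily the method implemented in Envasion_Attack.py; the function name and epsilon value are assumptions, and pixel values are assumed to lie in [0, 1].

```python
# Sketch of a single-step evasion attack (FGSM), assuming a PyTorch model.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Return slightly perturbed images that aim to cause misclassification."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Nudge each pixel in the direction that increases the loss.
    adv_images = images + epsilon * images.grad.sign()
    # Assumes inputs are normalized to [0, 1].
    return adv_images.clamp(0, 1).detach()
```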
Poison attacks occur in the training phase of a machine learning model. The attacker injects malicious data (such as backdoor samples) into the training dataset, causing the model to make incorrect predictions when it later encounters certain types of inputs.
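The following is a minimal sketch of one simple backdoor-style poisoning step: a small trigger patch is stamped onto a fraction of training images and their labels are flipped to a target class. The function name, trigger size, and ratio are assumptions for illustration; Poison_Attack.py may implement a different poisoning strategy.

```python
# Sketch of backdoor-style data poisoning on a training batch (PyTorch, NCHW).
import torch

def poison_batch(images, labels, target_class=0, poison_ratio=0.1):
    """Stamp a white trigger patch on part of the batch and relabel it."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(len(images) * poison_ratio)
    if n_poison > 0:
        # 4x4 trigger in the bottom-right corner; assumes pixel values in [0, 1].
        images[:n_poison, :, -4:, -4:] = 1.0
        # Flip the poisoned samples' labels to the attacker's target class.
        labels[:n_poison] = target_class
    return images, labels
```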
These pipelines can reach above 90% accuracy within 10 epochs.
If you have any questions, contact: B11132009@mail.ntust.edu.tw