
Adversarial Insight ML (AIML)

“Why does your machine lie?”

AIML, short for Adversarial Insight ML, is your go-to Python package for gauging your machine learning image classification models' resilience against adversarial attacks. With AIML, you can effortlessly test your models against a spectrum of adversarial examples, receiving crisp, insightful feedback. AIML strives for user-friendliness, making it accessible even to non-technical users.

We've meticulously selected various attack methods, including AutoProjectedGradientDescent, CarliniL0Method, CarliniL2Method, CarliniLInfMethod, DeepFool, PixelAttack, SquareAttack, and ZooAttack, to provide a strong robustness assessment.

For more details on the background and motivation behind AIML, have a read of our project report.

For more information on how to use AIML, visit our PyPI page and documentation page.

Installation

To install Adversarial Insight ML, you can use pip:

pip install adversarial-insight-ml

Usage

Here's a simple overview of AIML in use:

(Workflow diagram: img_workflow.png)

You can evaluate your model with the evaluate function:

from aiml.evaluation.evaluate import evaluate

evaluate(model, test_dataset)

The evaluate function has two required parameters:

  • input_model (str or model): A string of the name of the machine learning model or the machine learning model itself.
  • input_test_data (str or dataset): A string of the name of the testing dataset or the testing dataset itself.

The evaluate function also accepts the following optional parameters (see the example call after this list):

  • input_train_data (str or dataset, optional): A string of the name of the training dataset or the training dataset itself (default is None).
  • input_shape (tuple, optional): Shape of input data (default is None).
  • clip_values (tuple, optional): Range of input data values (default is None).
  • nb_classes (int, optional): Number of classes in the dataset (default is None).
  • batch_size_attack (int, optional): Batch size for attack testing (default is 64).
  • num_threads_attack (int, optional): Number of threads for attack testing (default is 0).
  • batch_size_train (int, optional): Batch size for training data (default is 64).
  • batch_size_test (int, optional): Batch size for test data (default is 64).
  • num_workers (int, optional): Number of workers to use for data loading (default is half of the available CPU cores).
  • dry (bool, optional): When True, only a single example is tested.
  • attack_para_list (list, optional): List of parameter combinations for the attack. See Custom Attack Parameters.
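
For instance, a fuller call that exercises several of the optional parameters might look like the sketch below. The model and dataset here are illustrative stand-ins (a toy linear classifier and the CIFAR-10 test split), and we assume the optional parameters are accepted as keyword arguments:

import torch.nn as nn
from torchvision import datasets, transforms

from aiml.evaluation.evaluate import evaluate

# Illustrative stand-ins; substitute your own trained model and test data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
test_dataset = datasets.CIFAR10(
    root="data", train=False, download=True, transform=transforms.ToTensor()
)

evaluate(
    model,                    # input_model
    test_dataset,             # input_test_data
    input_shape=(3, 32, 32),  # channels-first image shape
    clip_values=(0.0, 1.0),   # range of input data values
    nb_classes=10,            # number of classes in the dataset
    batch_size_attack=64,     # batch size for attack testing
    dry=True,                 # test a single example first
)

Running with dry=True is a cheap way to confirm that the model and data are wired up correctly before launching a full evaluation.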

See the demos in the examples/ directory for usage in action.

Alternatively, we also offer guides on our documentation page.

Features

Evaluation Results

After evaluating your model with the evaluate function, we provide the following insights:

  • A summary of the adversarial attacks performed, saved in a text file named attack_evaluation_result.txt followed by the date. For example, see img_result.png.
  • Samples of the original and adversarial images, saved in a directory named img/ followed by the date. For example, see img_folderlist.png and img_sample.png.
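
Because both outputs carry a date suffix, you can locate the artifacts of the most recent run programmatically. The following sketch assumes only the naming pattern described above:

import glob

# Summary files: attack_evaluation_result.txt followed by the date.
summaries = sorted(glob.glob("attack_evaluation_result*"))
if summaries:
    print("Latest summary:", summaries[-1])

# Sample-image directories: img/ followed by the date.
image_dirs = sorted(glob.glob("img*/"))
print("Image directories:", image_dirs)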

Custom Attack Parameters

The optional attack_para_list parameter can be used to input custom attack parameters.
For example, this is how it is set by default:

AUTO_PROJECTED_CROSS_ENTROPY = [[0.03], [0.06], [0.13], [0.25]]
AUTO_PROJECTED_DIFFERENCE_LOGITS_RATIO = [[0.03], [0.06], [0.13], [0.25]]
CARLINI_L0_ATTACK = [[0], [10], [100]]
CARLINI_L2_ATTACK = [[0], [10], [100]]
CARLINI_LINF_ATTACK = [[0], [10], [100]]
DEEP_FOOL_ATTACK = [[1e-06]]
PIXEL_ATTACK = [[100]]
SQUARE_ATTACK = [[0.03], [0.06], [0.13], [0.25]]
ZOO_ATTACK = [[0], [10], [100]]

attack_para_list = [
  AUTO_PROJECTED_CROSS_ENTROPY,
  AUTO_PROJECTED_DIFFERENCE_LOGITS_RATIO,
  CARLINI_L0_ATTACK,
  CARLINI_L2_ATTACK,
  CARLINI_LINF_ATTACK,
  DEEP_FOOL_ATTACK,
  PIXEL_ATTACK,
  SQUARE_ATTACK,
  ZOO_ATTACK,
]
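
To override the defaults, build the same nested-list structure and pass it through attack_para_list. The following sketch reuses the model and test_dataset from the Usage example; the parameter values are illustrative, and we assume the list must keep the same attack ordering as the defaults above:

from aiml.evaluation.evaluate import evaluate

# One entry per attack, in the same order as the default list.
custom_attack_para_list = [
    [[0.03], [0.06]],  # AutoProjectedGradientDescent (cross-entropy)
    [[0.03], [0.06]],  # AutoProjectedGradientDescent (difference logits ratio)
    [[0], [10]],       # CarliniL0Method
    [[0], [10]],       # CarliniL2Method
    [[0], [10]],       # CarliniLInfMethod
    [[1e-06]],         # DeepFool
    [[100]],           # PixelAttack
    [[0.06], [0.13]],  # SquareAttack
    [[0], [10]],       # ZooAttack
]

evaluate(model, test_dataset, attack_para_list=custom_attack_para_list)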

Contributing

Code Style
Always adhere to the PEP 8 style guide for writing Python code. Allow up to 99 characters per line as the absolute maximum. Alternatively, just use black.

Commit Messages
As per Documentation/SubmittingPatches in the Git repo, write commit messages in present tense and imperative mood, e.g., "Add feature" instead of "Added feature". Craft your messages as if you're giving orders to the codebase to change its behaviour.

Branching
We conform to a variation of the "GitHub Flow" convention, but not strictly. For example, see the following types of branches:

  • main: This branch is always deployable and reflects the production state.
  • bugfix/*: For bug fixes.

Project Management Tool

We use the project management tool Jira to coordinate and organise tasks for our project. Our Jira board can be found here.

Technologies Used

The main technologies used for this project are:

The full list containing all dependencies and sub-dependencies can be found here.

Future Plans

This is an ongoing project; the team and client continue to maintain the package.

Some future plans we are considering for the project are:

  • Improvements to the surrogate model.
  • Testing our package with more models and datasets.
  • Adding suggested defence tactics.
  • Adding an easier way to set the custom attack parameters.
  • Fixing other bugs.

Look at our issues tab for any ongoing changes under development.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

We extend our sincere appreciation to the following individuals who have been instrumental in the success of this project:

Firstly, our client Luke Chang. His invaluable guidance and insights supported us from the beginning through every phase, ensuring our work remained aligned with practical needs. This project would not have been possible without his efforts.

We'd also like to express our gratitude to Dr. Asma Shakil, who has coordinated and provided an opportunity for us to work together on this project.

Thank you for being part of this journey.

Warm regards, Team 7

Team 7 - Contacts