CD-MTA is a framework for performing multi-targeted adversarial attacks across different datasets and models, even when the target classes are unknown or unavailable during training.
Paper link: Coming soon.
This project supports both training and evaluation across multiple datasets and classifiers.
The following datasets are implemented in `./cd_mta/lib/dataset/`:

```python
SUPPORTED_DATASET: Dict[str, Type[MetaLoader]] = {
    "imagenet": ImageNet,
    "imagenet_timm": ImageNet,
    "imagenet_robust": ImageNet,
    "imagenet_incv3": ImageNet_IncV3,
    "imagenet_timm_incv3": ImageNet,
    "cifar10": Cifar10,
    "cifar100": Cifar100,
    "stl10": Stl10,
    "svhn": Svhn,
    "cub": CUB,
    "air": Air,
    "stcar": STCar,
}
```

Their corresponding image classifiers are implemented in `./cd_mta/lib/classifiers`.
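The registry above maps dataset names to loader classes, so several keys can share one loader while selecting different preprocessing or classifier conventions. A minimal, self-contained sketch of this lookup pattern (the `MetaLoader` base class and loader attributes here are stand-ins, not the repository's actual implementation):

```python
from typing import Dict, Type

class MetaLoader:
    """Stand-in base class for the loaders in ./cd_mta/lib/dataset/."""
    def __init__(self, split: str = "train"):
        self.split = split

class ImageNet(MetaLoader):
    num_classes = 1000

class Cifar10(MetaLoader):
    num_classes = 10

# Several keys may map to the same loader class; the key mainly selects
# conventions (e.g. timm-style vs. torchvision-style classifiers).
SUPPORTED_DATASET: Dict[str, Type[MetaLoader]] = {
    "imagenet": ImageNet,
    "imagenet_timm": ImageNet,
    "cifar10": Cifar10,
}

def build_dataset(name: str, **kwargs) -> MetaLoader:
    """Instantiate the loader registered under `name`, failing loudly otherwise."""
    try:
        cls = SUPPORTED_DATASET[name]
    except KeyError:
        raise ValueError(
            f"Unknown dataset '{name}'; supported: {sorted(SUPPORTED_DATASET)}"
        )
    return cls(**kwargs)

loader = build_dataset("cifar10", split="test")
print(type(loader).__name__, loader.split)  # Cifar10 test
```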
Poetry is a tool for dependency management and virtual environments in Python. To install CD-MTA using Poetry, follow these steps:
```bash
# Install Poetry (if not already installed)
curl -sSL https://install.python-poetry.org | python3 -

# Clone the repository
git clone https://github.com/tgoncalv/CD-MTA.git
cd CD-MTA

# Install dependencies
poetry install

# Activate the virtual environment
poetry shell
```

Alternatively, you can install CD-MTA using Conda:
```bash
# Create a Conda environment
conda create -n CD-MTA python=3.9

# Clone the repository
git clone https://github.com/tgoncalv/CD-MTA.git
cd CD-MTA

# Activate the environment
conda activate CD-MTA

# Install dependencies
pip install -r requirements.txt
```

To train a model with CD-MTA, use the following command:
```bash
./train.sh \
    --yaml ./cd_mta/cfg/default.yaml \
    --dataset imagenet \
    --source_input <path_to_source_dataset> \
    --target_input <path_to_target_dataset> \
    --output <output_directory> \
    --gpu <gpu_id>
```

Note:

- You can replace `--yaml ./cd_mta/cfg/default.yaml` with the path to your custom configuration file.
- By default, the YAML file sets the label flag "GAKer200" so that training uses the same 200 classes as GAKer.
- By default, the YAML file uses vgg19bn as the target classifier, with the loss applied on layer 38. The file also gives details about our default settings for resnet50 and densenet121 (see `target_classifiers` in `./cd_mta/cfg/default.yaml`).
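The defaults above suggest a per-model mapping from target classifier to the feature layer the loss is applied at. A purely hypothetical sketch of how such a `target_classifiers` section might be organized (the keys and structure here are assumptions for illustration; consult `./cd_mta/cfg/default.yaml` for the real schema and the resnet50/densenet121 values):

```yaml
# Hypothetical illustration only -- see ./cd_mta/cfg/default.yaml
# for the actual schema used by CD-MTA.
target_classifiers:
  vgg19bn:
    feature_layer: 38          # default: loss applied on layer 38
  resnet50:
    feature_layer: <see default.yaml>
  densenet121:
    feature_layer: <see default.yaml>
```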
After training, you can evaluate the model with:
```bash
./test.sh \
    --yaml <path_to_yaml> \
    --dataset imagenet \
    --source_input <path_to_source_dataset> \
    --target_input <path_to_target_dataset> \
    --test_source_label_flag "Random1000_-GAKer200" \
    --test_target_label_flag "Random1000_-GAKer200" \
    --output <output_directory> \
    --gpu <gpu_id>
```

Note:

- The model weights should be located in the folder specified by the `resume_path` parameter in the test YAML file.
- The example above is for ImageNet. You may need to adjust `--dataset` and `--test_source_label_flag`/`--test_target_label_flag` depending on the model and dataset you wish to evaluate:
  - For ViT/DeiT models, use `--dataset imagenet_timm` (implemented in `./cd_mta/lib/model/classifier/timm/__init__.py`)
  - For Inception V3, use `--dataset imagenet_incv3` (implemented in `./cd_mta/lib/model/classifier/torch_builtin/__init__.py`)
  - For robust models, use `--dataset imagenet_robust` or `--dataset imagenet_timm_incv3` (see supported models in the `torch_builtin` and `timm` classifier folders)
  - For other ImageNet models, the example above applies as-is (see the list of supported models in `./cd_mta/lib/model/classifier/torch_builtin/__init__.py`)
  - For other datasets, use `--dataset <dataset_name>`, where `<dataset_name>` is one of: `[cifar10, cifar100, stl10, svhn, cub, air, stcar]`
  - When specifying `--dataset <...>`, the `test.sh` script automatically adapts the `test_source_label_flag` and `test_target_label_flag` (see the script for details).
- The default YAML file (`./cd_mta/cfg/default.yaml`) defines the available classifiers for each dataset (see the `test_classifiers` argument in the YAML file).
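The flag strings above, such as `"Random1000_-GAKer200"`, read as a base class pool composed with exclusions (a random 1000-class pool minus the 200 GAKer classes). A hypothetical parser illustrating that composition (the `_-` split convention and the helper names are assumptions for illustration, not the repository's actual code):

```python
from typing import List, Set, Tuple

def parse_label_flag(flag: str) -> Tuple[str, List[str]]:
    """Split a flag like 'Random1000_-GAKer200' into a base set name and
    the names of the sets excluded from it (assumed convention: '_-'
    separates the base pool from each exclusion)."""
    base, *excluded = flag.split("_-")
    return base, excluded

def resolve_classes(base: Set[int], exclusions: List[Set[int]]) -> Set[int]:
    """Remove every excluded class-ID set from the base pool."""
    result = set(base)
    for ex in exclusions:
        result -= ex
    return result

base_name, excluded_names = parse_label_flag("Random1000_-GAKer200")
print(base_name, excluded_names)  # Random1000 ['GAKer200']

# Toy class-ID sets standing in for the real ImageNet class lists.
pool = set(range(1000))   # "Random1000"
gaker = set(range(200))   # "GAKer200"
print(len(resolve_classes(pool, [gaker])))  # 800
```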
CD-MTA/
├── cd_mta/
│ ├── cfg/ # Configuration files for training and evaluation
│ │ └── default.yaml # Default training and evaluation configuration
│ ├── core/ # Core functions to run the code
│ ├── lib/ # Main modules
│ │ ├── dataset/ # Dataset loading and processing
│ │ ├── engine/ # Training, testing, and sampling functions
│ │ ├── model/ # Main framework and model definitions
│ │ └── utils/ # Utility functions and helper tools
│ ├── pretrained/ # Pretrained models or weights
│ │ ├── dcl/ # DCL weights for fine-grained datasets
│ │ │ └── README.md # Installation guide for DCL weights
│ │ ├── cd-mta/ # Pretrained weights for CD-MTA
│ │ └── <...> # Note: No manual installation is needed for coarse-grained datasets (e.g., CIFAR-10)
│ └── src/ # Source code for training and evaluation
│ ├── train.py # Training script
│ └── test.py # Evaluation script
├── train.sh # Shell script for training
├── test.sh # Shell script for evaluation
├── README.md # Project documentation
├── pyproject.toml # Poetry project configuration
└── requirements.txt # Dependencies for Conda or pip
For detailed explanations of each parameter, refer to the comments within the YAML configuration files.

Contributions, bug reports, and feature requests are welcome. Fork the repository and submit pull requests to improve the project.