This project studies the adversarial robustness of image classification models on satellite imagery (the EuroSAT dataset). The goal is to implement and compare different CNN architectures, then analyze their vulnerability to adversarial attacks.
```
Adversarial_examples/
├── .gitignore                            # Git ignore rules
├── requirements.txt                      # Python dependencies
├── pipfreeze.txt                         # Full pip freeze output for reproducibility
├── config.py                             # Configuration settings (hyperparameters, paths, etc.)
├── main.py                               # Main entry point for experiments
├── download_data.sh                      # Script to download the EuroSAT dataset
├── 0_create_datasets_and_baseline.ipynb  # Creates train/test sets and the baseline model
├── 1_Ilyas.py                            # Runs the Ilyas et al. experiment
├── 2_Madry.ipynb                         # Runs the Madry et al. experiment (PGD adversarial training)
├── 3_Xie.ipynb                           # Runs the Xie et al. experiment
├── Report.pdf                            # Project report
├── README.md                             # Project documentation
│
├── attacks/                              # Adversarial attack implementations
│   ├── __init__.py
│   ├── base_attack.py                    # Base class for all attacks
│   ├── fgsm.py                           # FGSM attack implementation
│   ├── pgd.py                            # PGD attack implementation (Madry et al.)
│   ├── test_attacks.py                   # Test scripts for attacks
│   ├── utils.py                          # Attack-specific utilities
│   └── configs/                          # Attack configuration files
│       └── attack_config.json            # JSON configs for different attacks
│
├── data_loader/                          # Data handling utilities
│   ├── __init__.py
│   └── dataset.py                        # EuroSAT dataset class and data loaders
│
├── datasets/                             # Dataset analysis and preprocessing
│   ├── EuroSAT_RGB/                      # EuroSAT train/test split (created by 0_create_datasets_and_baseline.ipynb)
│   └── feature_analysis.py               # Feature analysis of datasets
│
├── models/                               # Neural network model implementations
│   ├── __init__.py
│   ├── simple_cnn.py                     # Custom CNN implementation (baseline)
│   ├── resnet.py                         # Standard ResNet implementation
│   └── resnet_with_advprop.py            # ResNet with AdvProp adversarial training
│
├── train/                                # Training frameworks
│   ├── __init__.py
│   ├── trainer.py                        # Standard model trainer
│   └── advprop_trainer.py                # Trainer for AdvProp adversarial training
│
├── utils/                                # Helper functions and utilities
│   ├── utils.py                          # General utility functions
│   ├── visualization.py                  # Plotting and visualization utilities
│   ├── evaluation_logger.py              # Logger for evaluation metrics
│   └── result_collector.py               # Collector for experimental results
│
├── outputs/                              # Generated outputs and results
│   ├── models/                           # Saved model checkpoints
│   ├── plots/                            # Training/validation plots
│   ├── evaluation_logs/                  # Saved evaluation results
│   ├── results/                          # Experimental results
│   └── analysis/                         # Analysis outputs
│
└── data/EuroSAT_RGB/                     # EuroSAT dataset directory (when downloaded)
    ├── AnnualCrop/
    ├── Forest/
    ├── HerbaceousVegetation/
    ├── Highway/
    ├── Industrial/
    ├── Pasture/
    ├── PermanentCrop/
    ├── Residential/
    ├── River/
    └── SeaLake/
```
- Adversarial Examples Are Not Bugs, They Are Features. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
- Towards Deep Learning Models Resistant to Adversarial Attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. In International Conference on Learning Representations (ICLR), 2018.
- Adversarial Examples Improve Image Recognition. Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan Yuille, and Quoc V. Le. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
- EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification. Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. In IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019.
- Explaining and Harnessing Adversarial Examples. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. In International Conference on Learning Representations (ICLR), 2015.
```bash
# Install dependencies
pip install -r requirements.txt

# Download data to the workspace (change the workspace if you do not use sspcloud)
bash download_data.sh

# The dataset should now be in:
# data/EuroSAT_RGB/ (27,000 images, 10 classes)

# Verify the dataset structure
ls -la data/EuroSAT_RGB/
# Should show 10 folders: AnnualCrop, Forest, HerbaceousVegetation, etc.
```

Run the notebook `0_create_datasets_and_baseline.ipynb` to create the reproducible clean train and test sets, train the baseline model, and create the attacked test set.
To follow our framework:
- Run the script `1_Ilyas.py` to reproduce the experiment of Ilyas et al.
- Run the notebook `2_Madry.ipynb` to reproduce the experiment of Madry et al.
- Run the notebook `3_Xie.ipynb` to reproduce the experiment of Xie et al.
The script `1_Ilyas.py` will:
- load the baseline model
- generate adversarial sets (using FGSM, PGD, and random noise)
- train a model on each adversarial train set
- evaluate those models on the adversarial and clean test sets
- produce metrics and plots
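The FGSM step used to generate adversarial sets can be sketched as follows. This is a minimal PyTorch illustration of the technique, not the project's `attacks/fgsm.py`; the function name and its arguments are placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """One-step FGSM (Goodfellow et al.): perturb each pixel by epsilon
    in the direction of the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step along the gradient sign, then clip back to the valid pixel range
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()
```

The random-noise baseline replaces the gradient sign with a random sign tensor of the same magnitude, which is what makes it a useful control for the gradient-based attacks.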
The notebook `2_Madry.ipynb` will:
- train a model with the min-max optimization approach and save it
- test it on the clean test set and the attacked test set
- produce metrics and plots
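The min-max training of Madry et al. alternates an inner maximization (PGD finds a worst-case perturbation) with an outer minimization (one optimizer step on the adversarial batch). A minimal sketch of one such step, not the project's `attacks/pgd.py` or `train/trainer.py`; hyperparameter values are illustrative:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, epsilon, alpha, steps):
    """Inner maximization: multi-step gradient ascent on the loss,
    projected back into the L-infinity ball of radius epsilon."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()
        # Project onto the epsilon-ball around the clean images, then clip
        adv = images + (adv - images).clamp(-epsilon, epsilon)
        adv = adv.clamp(0, 1)
    return adv.detach()

def adversarial_training_step(model, optimizer, images, labels,
                              epsilon=0.03, alpha=0.01, steps=7):
    """Outer minimization: one optimizer step on the PGD batch."""
    model.eval()  # freeze BN statistics while crafting the attack
    adv = pgd_attack(model, images, labels, epsilon, alpha, steps)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```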
The notebook `3_Xie.ipynb` will:
- train a model with the AdvProp approach of Xie et al.
- test it on the clean test set and the attacked test set
- produce metrics and plots
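The core idea of AdvProp (Xie et al.) is to keep two sets of batch-norm statistics, one for clean and one for adversarial inputs, so that the two distributions do not contaminate each other. A minimal sketch of such a dual-BN layer; this is an illustration of the idea, not the project's `models/resnet_with_advprop.py`:

```python
import torch
import torch.nn as nn

class DualBatchNorm2d(nn.Module):
    """BatchNorm with two sets of statistics: one for clean inputs,
    one for adversarial inputs (the auxiliary BN of AdvProp)."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)
        self.adversarial = False  # toggled by the trainer per forward pass

    def forward(self, x):
        return self.bn_adv(x) if self.adversarial else self.bn_clean(x)
```

During training, the trainer would flip `adversarial` on every such module before the adversarial forward pass and flip it back for the clean pass; at test time only the clean statistics are used.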
| Option | Description | Example |
|---|---|---|
| `--model` | Choose the model architecture | `--model simple_cnn` |
| `--seed` | Random seed for reproducibility | `--seed 42` |
| `--epochs` | Number of training epochs | `--epochs 20` |
| `--batch-size` | Batch size for training | `--batch-size 64` |
| `--lr` | Learning rate | `--lr 0.0001` |
| `--patience` | Early-stopping patience | `--patience 10` |
| `--data-path-train` | Path to the train dataset | `--data-path-train datasets/EuroSAT_RGB/train_clean` |
| `--data-path-eval` | Path to the test dataset | `--data-path-eval datasets/EuroSAT_RGB/test_clean` |
| `--train` | Train the model | `--train` |
| `--evaluate` | Evaluate on the test set | `--evaluate` |
| `--visualize` | Visualize predictions | `--visualize` |
| `--save-model-path` | Path to save the model | `--save-model-path outputs/model` |
| `--save-plots-path` | Path to save plots | `--save-plots-path outputs/plots` |
| `--advprop` | Use the AdvProp procedure during training | `--advprop` |
| `--epsilon` | Epsilon for PGD during AdvProp training | `--epsilon 0.2` |
| `--advprop-iterations` | Number of PGD iterations during AdvProp training | `--advprop-iterations 10` |
| `--madry` | Use min-max optimization (Madry) training | `--madry` |
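The options above could be wired up with `argparse` roughly as follows. This is a hedged sketch of what `main.py`'s interface might look like, not its actual implementation; the defaults are taken from the examples in the table, and the flag/value semantics of `--madry` are an assumption:

```python
import argparse

def build_parser():
    """Sketch of the CLI described above; defaults are illustrative."""
    p = argparse.ArgumentParser(description="EuroSAT adversarial experiments")
    p.add_argument("--model", default="simple_cnn")
    p.add_argument("--seed", type=int, default=42)
    p.add_argument("--epochs", type=int, default=20)
    p.add_argument("--batch-size", type=int, default=64)
    p.add_argument("--lr", type=float, default=1e-4)
    p.add_argument("--patience", type=int, default=10)
    p.add_argument("--data-path-train")
    p.add_argument("--data-path-eval")
    p.add_argument("--train", action="store_true")
    p.add_argument("--evaluate", action="store_true")
    p.add_argument("--visualize", action="store_true")
    p.add_argument("--save-model-path", default="outputs/model")
    p.add_argument("--save-plots-path", default="outputs/plots")
    p.add_argument("--advprop", action="store_true")
    p.add_argument("--epsilon", type=float, default=0.2)
    p.add_argument("--advprop-iterations", type=int, default=10)
    p.add_argument("--madry", action="store_true")  # assumed to be a flag
    return p
```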
```bash
# Step 1: Train the Simple CNN on RGB images
python main.py --model simple_cnn --train --epochs 15 --data-path-train datasets/EuroSAT_RGB/train_clean

# Step 2: Evaluate and visualize
python main.py --model simple_cnn --evaluate --visualize --data-path-eval datasets/EuroSAT_RGB/test_clean --save-model-path outputs/model --save-plots-path outputs/plots

# Just evaluate an existing model
python main.py --model simple_cnn --evaluate --data-path-eval data/EuroSAT_RGB/test_clean --save-plots-path outputs/plots --save-model-path outputs/model
```

Edit `config.py` to modify:
- Image size (default: 64x64)
- Batch size (default: 32)
- Learning rate (default: 0.001)
- Data augmentation
- Model save paths

Important: update `DATA_PATH` in `config.py` to match your dataset location:

```python
DATA_PATH = 'data/EuroSAT_RGB'  # For RGB images
# or
DATA_PATH = 'data/EuroSAT_MS'   # For multi-spectral images (advanced)
```

The script automatically creates:
- model checkpoints in `outputs/models/`
- training plots in `outputs/plots/`
- console logs of training progress
- Images: 27,000
- Classes: 10
- Image size: 64x64 pixels
- Format: RGB (3 channels)
- Class names: AnnualCrop, Forest, HerbaceousVegetation, Highway, Industrial, Pasture, PermanentCrop, Residential, River, SeaLake
```bash
# Check what's in your data folder
ls -la data/

# If you have a different structure, update config.py
# or use the --data-path argument
python main.py --train --data-path ./data/EuroSAT_RGB

# Reduce the batch size
python main.py --batch-size 16

# Reduce the image size (edit config.py)
# Change IMG_SIZE = 32

# Make sure you're in the project directory
cd ~/work/Adversarial_examples

# Install dependencies
pip install -r requirements.txt
```