This repository contains an evaluation system for running experiments that measure the success rate of adversarial machine learning (AML) evasion attacks against network intrusion detection systems (NIDS). The system evaluates classifiers trained on network data sets against adversarial black-box evasion attacks.
## Experiment options
| Option | Description |
|---|---|
| Datasets | Aposemat IoT-23: IoT network traffic. UNSW-NB15: traditional network intrusion data. |
| Classifiers | Keras deep neural network (DNN). XGBoost (XGB), a tree-based ensemble learner. |
| Defenses | For DNN, the defense is adversarial training. For XGB, the defense is RobustTrees. |
| Attacks | Zeroth-Order Optimization (ZOO). HopSkipJump attack (HSJ). |
## Source code organization
| Directory | Description |
|---|---|
| `.github` | Actions workflow files |
| `aml` | Evaluation system implementation source code |
| `config` | Experiment configuration files |
| `data` | Preprocessed datasets ready for experiments |
| `result` | Referential results for comparison |
| `RobustTrees` | (submodule) XGBoost enhanced with adversarial defense |
The easiest way to run experiments is with Docker. The Docker build assumes an amd64-compatible host; otherwise, build from source.
```shell
git clone https://github.com/aucad/aml-networks.git && cd aml-networks
docker build -t aml-networks .
docker run -v $(pwd)/output:/usr/src/aml-networks/output -it --rm aml-networks /bin/bash
```
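Once inside the container shell, the make targets described below can be run directly; results written to `output/` appear on the host through the bind mount. A minimal sketch (assumes the image built successfully):

```shell
# Inside the container shell:
make sample    # sampled experiments (~90 min)
make plots     # render plots from the output/ directory
exit           # results remain in ./output on the host
```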
The runtime estimates below are for an 8-core, 32 GB RAM Linux (Ubuntu 20.04) machine running without Docker; actual times may vary.
```shell
make query
```

[24h] This experiment uses the full testing set and repeats experiments with different model query limits. By default, the maximum query limits are 2, 5, and the attack's default (which varies by attack).
```shell
make sample
```

[90 min] Runs experiments on a limited input size by randomly sampling the testing set. By default, the sample size is 50 and sampling is repeated 3 times; the result is the average of the 3 runs.
```shell
make plots
```

[1 min] Plots the results of the two previous experiments. The plot data source is the `output/` directory.
There are three execution modes:

- `experiment`: performs adversarial attack experiments
- `plot`: generates tables from captured experiment results
- `validate`: checks a dataset for network protocol correctness
Custom experiments can be defined by constructing appropriate commands.
```shell
python3 -m aml {experiment|plot|validate} [ARGS]
```
To see available options for experiments, run:

```shell
python3 -m aml experiment --help
```

To see available options for plotting results, run:

```shell
python3 -m aml plot --help
```

To see available options for the validator, run:

```shell
python3 -m aml validate --help
```
These steps explain how to run experiments from source, natively on the host machine. Follow these steps also if you want to prepare a development environment and make code changes.
### Step 0: Environment setup
- 🐍 **Python environment**: 3.8 or 3.9 is required.
- ⚠️ **Submodule**: this repository has a submodule. Clone it including the submodule:

```shell
git clone --recurse-submodules https://github.com/aucad/aml-networks.git
```
This implementation is not compatible with Apple M1 machines due to an underlying dependency (tensorflow-macos). This does not prevent most experiments, but some issues may surface periodically.
### Step 1: Build robust XGBoost
The evaluation uses a modified version of the XGBoost classifier, enhanced with an adversarial robustness property. This classifier is not installed with the other package dependencies and must be built locally from source, i.e., from the submodule `RobustTrees`.
By default, you will need a gcc compiler with OpenMP support.
To build robust XGBoost, run:
```shell
cd RobustTrees
make -j4
```
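After the build completes, the artifacts can be sanity-checked. The paths below follow the standard XGBoost 0.72 build layout and are an assumption about this fork:

```shell
# Assumed artifact locations for an XGBoost 0.72-style build (run from RobustTrees/):
ls lib/             # should contain the compiled library, e.g. libxgboost.so
ls python-package/  # Python bindings, installed in Step 2
```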
If the build causes issues, follow the build instructions in the RobustTrees repository to build it from source.
### Step 2: Install dependencies
Install the required Python dependencies:

```shell
python3 -m pip install -r requirements-dev.txt
```
Install XGBoost from the local build location:

```shell
python3 -m pip install -e "/path/to/RobustTrees/python-package"
```
### Step 3: (optional) Check installation
Check the xgboost runtime; the version number should be 0.72.

```shell
python3 -m pip show xgboost
```
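As an additional check (a sketch; assumes the editable install from Step 2 succeeded), the version can also be verified by importing the package directly:

```shell
python3 -c "import xgboost; print(xgboost.__version__)"  # should print 0.72
```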
Run a help command, which should produce a help prompt:

```shell
python3 -m aml
```
You are ready to run experiments and make code changes.