CANEDERLI

On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems
Preprint Available »

Francesco Marchiori · Mauro Conti

Table of Contents
  1. Abstract
  2. Citation
  3. Usage
  4. Models
  5. Reproducibility

🧩 Abstract

The growing integration of vehicles with external networks has led to a surge in attacks targeting their Controller Area Network (CAN) internal bus. As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats. With the increasing volume of data facilitated by the integration of Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication networks, most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models. However, these systems are susceptible to adversarial evasion attacks. While many researchers have explored this vulnerability, their studies often involve unrealistic assumptions, lack consideration for a realistic threat model, and fail to provide effective solutions. In this paper, we present CANEDERLI (CAN Evasion Detection ResiLIence), a novel framework for securing CAN-based IDSs. Our system considers a realistic threat model and addresses the impact of adversarial attacks on DL-based detection systems. Our findings highlight strong transferability properties among diverse attack methodologies by considering multiple state-of-the-art attacks and model architectures. We analyze the impact of adversarial training in addressing this threat and propose an adaptive online adversarial training technique outclassing traditional fine-tuning methodologies. By making our framework publicly available, we aid practitioners and researchers in assessing the resilience of IDSs to a varied adversarial landscape.

(back to top)

🗣️ Citation

Please cite this work when referring to CANEDERLI.

@article{marchiori2024canederli,
  title={CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems},
  author={Marchiori, Francesco and Conti, Mauro},
  journal={arXiv preprint arXiv:2404.04648},
  year={2024}
}

(back to top)

⚙️ Usage

To train the models, generate the attacks, and evaluate adversarial transferability and adversarial training, start by cloning the repository.

git clone https://github.com/Mhackiori/CANEDERLI.git
cd CANEDERLI

Then, install the required Python packages by running the following command.

pip install -r requirements.txt

(back to top)

🤖 Models

The utils directory contains several Python files that are referenced by all the scripts for baseline evaluation and attacks. In particular, details on the model architectures can be found in the models.py script, and functions for training and evaluation can be found in the helpers.py script. The seed, training details, and other parameters can be changed in the params.py file.
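
As a quick reference, the layout of the utils directory is as follows (limited to the files mentioned above; other files may also be present):

utils/
├── models.py    # model architectures
├── helpers.py   # training and evaluation functions
└── params.py    # seed, training details, and other parameters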

(back to top)

🔁 Reproducibility

The first step is to process the dataset. The original dataset can be found here, but we already include part of the pre-processed data in .csv format. The pipeline then consists of the following steps.

  1. Preprocessing: running preprocessing.py creates two datasets for each vehicle in the dataset folder (one for binary classification, one for multiclass classification).
  2. Baseline: models can be trained by running the baseline.py script. The program will automatically use the already generated models in the models folder, but it is possible to retrain them by deleting the .pth files in the directory. The script also evaluates the models on the same dataset (taking the train/test split into account).
  3. Attacks: once the models are trained, the attacks can be generated by running attacks.py. This creates several .csv datasets stored in the attacks folder (gitignored here due to storage constraints).
  4. Evaluation: the evaluation.py script evaluates each model on each adversarial dataset and generates the .csv files in the results folder.
  5. Adversarial training: adversarial training, in both fine-tuning and online modes, is handled by the adversarial-training.py script, which also automatically performs its evaluation (stored in the results folder).
  6. Results: the results.ipynb notebook then organizes the results as shown in the paper.

As we used a seed in all our scripts, results should be (almost exactly) the same. We only noticed a few discrepancies when training/testing the models on different hardware, i.e., CPU vs. GPU. As such, if a GPU is accessible when reproducing the results, its usage is recommended.
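
For convenience, the full pipeline corresponds to the following sequence of commands, a minimal sketch assuming the repository root as the working directory and the requirements already installed:

python preprocessing.py          # build binary and multiclass datasets for each vehicle
python baseline.py               # train (or reuse) the models and evaluate them
python attacks.py                # generate the adversarial datasets in the attacks folder
python evaluation.py             # evaluate each model on each adversarial dataset
python adversarial-training.py   # fine-tuning and online adversarial training + evaluation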

(back to top)