Simulation of an Intrusion Detection System using Federated Learning

This repo is largely influenced by the work of Yann Busnel and Léo Lavaur.

For now, we use the UNSW-NB15 dataset, located in the folder ./dataset/UNSW-B15 and available here.

💻 - Installation

Requirements

  • Python 3.9 (later versions may work but are untested)
  • pip
  • virtualenv (only if the venv module is not included with your Python installation)

Installation

git clone [url]
cd IDS_FL_Simulation
python3.9 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
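
To check that the environment is ready, a quick import test can help (this assumes Flower and TensorFlow are among the packages pinned in requirements.txt):

python -c "import flwr, tensorflow; print(flwr.__version__, tensorflow.__version__)"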

⚙️ - Usage

Preprocess Data

If you want to re-run the preprocessing of the UNSW-NB15 dataset, you can use the following command:

python src/preprocess.py --output hard_0.05_custom --hard --normal_frac 0.05

Arguments:

  • --output : name of the output folder
  • --hard : restrict the dataset even further. In hard mode, the DoS attack category is removed in addition to the categories removed by default (Analysis, Backdoors, Shellcode, Worms)
  • --normal_frac : fraction of normal (benign) traffic in the resulting dataset (a rough sketch of this filtering follows the list)
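
As a rough sketch only (not the actual preprocess.py implementation), the filtering behind these flags could look like the following. The attack_cat and label column names come from the UNSW-NB15 CSV files, and the exact semantics of normal_frac should be checked against the script itself:

import pandas as pd

# Categories removed by default, plus the extra one removed in hard mode.
REMOVED_DEFAULT = ["Analysis", "Backdoors", "Shellcode", "Worms"]
REMOVED_HARD = ["DoS"]

def filter_dataset(df: pd.DataFrame, hard: bool, normal_frac: float) -> pd.DataFrame:
    removed = REMOVED_DEFAULT + (REMOVED_HARD if hard else [])
    df = df[~df["attack_cat"].isin(removed)]

    normal = df[df["label"] == 0]
    attacks = df[df["label"] == 1]

    # Downsample benign traffic so it makes up roughly `normal_frac` of the result.
    n_normal = min(len(normal), int(len(attacks) * normal_frac / (1.0 - normal_frac)))
    normal = normal.sample(n=n_normal, random_state=42)

    # Shuffle the combined rows before returning.
    return pd.concat([normal, attacks]).sample(frac=1.0, random_state=42)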

Set Config

Every simulation is driven by a config file in YAML format. You can find an example called 'config_example.yaml' in the root folder.

In this file, you can set all the parameters of the simulation. Please see the config_example.yaml file for more details.
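
For orientation, a config in this style is usually consumed with PyYAML along these lines; the keys shown here are hypothetical placeholders, and config_example.yaml remains the authoritative reference:

import argparse
import yaml  # PyYAML

parser = argparse.ArgumentParser()
parser.add_argument("--config", required=True, help="path to the YAML config file")
args = parser.parse_args()

with open(args.config) as f:
    config = yaml.safe_load(f)  # plain nested dicts/lists

# Hypothetical keys, for illustration only.
num_clients = config.get("num_clients", 10)
num_rounds = config.get("num_rounds", 5)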

Start the Simulation

To start the simulation, you can then use the following command:

python main.py --config config_example.yaml

Note:

If you want to specify the output folder, use the --output argument. You can also force overwriting of the output folder with -f, and force re-extraction of the dataset with -r (otherwise, the existing data folder is reused if it exists).
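
For example, a full invocation with these options might look like this (the output folder name is just an illustration):

python main.py --config config_example.yaml --output my_simulation -f -r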

The Results

The results of the simulation are stored in the output folder (by default, the output folder is named after the config file).

You can find the following files (a short loading example follows the list):

  • confusion_matrix.png : confusion matrix of the final global model (on the test set)
  • metrics.json : metrics of the model for each class (on the test set)
  • history.json : history of the training metrics for each round
  • history.png : plot of the history of the training metrics for each round
  • model.keras : the final global model saved after training
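
To inspect a finished run programmatically, something along these lines should work, assuming TensorFlow/Keras is installed and the output folder is named after the config file (adjust the path to your own run):

import json
from tensorflow import keras

output_dir = "config_example"  # assumed default, derived from config_example.yaml

# Per-class metrics computed on the test set.
with open(f"{output_dir}/metrics.json") as f:
    print(json.load(f))

# Reload the final global model for further evaluation or inference.
model = keras.models.load_model(f"{output_dir}/model.keras")
model.summary()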

⚠️ Warning

If you are running the default Flwr (Flower) release, please make sure that the following argument in the main_fe.py file is commented out, as it is not supported there:

metric_evaluation_target=METRIC_EVALUATION_TARGET,

Other Information

To clean the project, you can use make clean.

In the Makefile, you can find some other commands that bypass the config file and launch the src scripts directly. They may not work as expected.

This repo is still under development.
