With the explosive rise of internet usage and the development of web applications across various platforms, ensuring network and system security has become a critical concern. Networks and web services are particularly susceptible to targeted attacks, as hackers and intruders persistently attempt to gain unauthorized access. In this context, artificial intelligence (AI) has emerged as a crucial tool for detecting intrusions and constructing effective Intrusion Detection Systems (IDSs) to counter cyber-attacks and malicious activities. IDSs developed using machine learning (ML) and deep learning (DL) techniques have demonstrated their efficacy and accuracy in detecting network attacks, offering high-performance capabilities and machine-centric solutions. Nevertheless, mainstream adoption, confidence, and trust in these systems have been greatly impeded by the fact that many ML/DL implementations are "black boxes," lacking human interpretability, transparency, explainability, and logical reasoning in their prediction outputs. This limitation has prompted questions about the accountability and comprehensibility of AI-driven intrusion detection systems. In this paper, we propose novel frameworks that incorporate XAI techniques to overcome the limited interpretability of ML/DL-based IDSs. We evaluate the effectiveness of nine proposed ML/DL architectures in network security solutions, based on the Random Forest Classifier (RFC), Random Forest with Recursive Feature Elimination (RFC + RFE), Decision Tree Classifier (DTC), Extra Trees Classifier (ETC), Extreme Gradient Boosting (XGBoost), Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Autoencoder algorithms. The models are applied to scan network traffic and to identify and report intrusions in real time, based on features extracted from the UNSW-NB15 dataset.
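One of the architectures above, Random Forest with Recursive Feature Elimination (RFC + RFE), can be sketched with scikit-learn. This is a minimal illustration, not the paper's implementation: the synthetic data, feature count, and labels are placeholders standing in for the traits extracted from UNSW-NB15.

```python
# Sketch of an RFC + RFE pipeline (hypothetical data in place of UNSW-NB15).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))           # stand-in for 20 extracted traffic traits
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # stand-in for benign (0) vs. attack (1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RFE recursively prunes the least informative features (by the forest's
# feature importances), then the forest classifies on the kept subset.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=8)
selector.fit(X_tr, y_tr)
accuracy = selector.score(X_te, y_te)
print(f"kept features: {selector.support_.sum()}, accuracy: {accuracy:.2f}")
```

Feature elimination of this kind reduces the dimensionality of the traffic representation before classification, which is one common motivation for pairing RFE with a forest in IDS pipelines.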
To explain how each decision is made by a model and to add explainability at every stage, we apply multiple XAI techniques, namely LIME, SHAP, and ELI5, on top of our architectures. The generated explanations provide quantifiable insights into the influential features and their respective impact on cyber-attack predictions.
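LIME, SHAP, and ELI5 are third-party libraries, so as a dependency-free sketch of the same idea, quantifying each feature's influence on a classifier's predictions, here is scikit-learn's model-agnostic permutation importance. This is a stand-in for illustration only, not the XAI methods used in the paper, and the data and labels are hypothetical.

```python
# Model-agnostic feature attribution via permutation importance
# (a stand-in illustration for LIME/SHAP/ELI5-style explanations).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X[:, 2] > 0).astype(int)  # only feature 2 drives the (synthetic) label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each feature in turn; the resulting drop in accuracy
# measures that feature's influence on the model's predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = np.argsort(result.importances_mean)[::-1]
print("most influential feature:", ranked[0])
```

The output ranks features by how much the model's performance degrades when each is randomized, giving the kind of quantifiable per-feature insight the explanations above aim for.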
About
Explainable AI-powered transparent Network Intrusion Detection System to enhance cybersecurity in real time.