"Adversarial Sample Attacks and Defenses based on LSTM-ED in Industrial Control Systems"

LSTM-FWED

Official implementation of "Adversarial Sample Attacks and Defenses based on LSTM-ED in Industrial Control Systems" by Yaru Liu, Lijuan Xu, Shumian Yang, Dawei Zhao, and Xin Li (Computers & Security, 2024).

Industrial control systems (ICS) are vulnerable to adversarial sample attacks. In the ICS setting, existing attacks often generate adversarial samples that violate protocol specifications and lose physical meaning, and generation is inefficient; on the defense side, it is difficult to design a model without prior information about the adversarial samples. To tackle these challenges, we propose an adversarial sample attack and defense method based on a Long Short-Term Memory network Encoder-Decoder (LSTM-ED). Our attack efficiently generates adversarial samples that conform to protocol specifications and retain physical meaning by adding perturbation values to sensor and actuator readings while complying with feature constraints. We then introduce an LSTM-ED Feature-Weight defense method (LSTM-FWED) that requires no explicit information about the adversarial samples. LSTM-FWED normalizes reconstruction errors across features so that anomaly scores are not dominated by poorly predicted features, yielding robust detection results. We validate the approach on a real-world critical-infrastructure testbed: the proposed attack reduces the precision of the LSTM-ED model by an average of 66.26%, with a maximum adversarial-sample generation time of 18 seconds, and LSTM-FWED improves AUC by an average of 21.83% over state-of-the-art anomaly detection baselines.
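The feature-wise normalization idea behind LSTM-FWED can be illustrated with a minimal sketch. This is our own simplification, not the repository's implementation: the function name, the z-score normalization, and the max-over-features aggregation are assumptions for illustration.

```python
import numpy as np

def normalized_anomaly_scores(x, x_hat, eps=1e-8):
    """Anomaly score per time step from feature-normalized reconstruction errors.

    x, x_hat: arrays of shape (T, F) -- observed and reconstructed windows.
    Normalizing each feature's error by its own statistics keeps a single
    poorly predicted feature from dominating the overall anomaly score.
    """
    err = np.abs(x - x_hat)           # (T, F) absolute reconstruction errors
    mu = err.mean(axis=0)             # per-feature mean error
    sigma = err.std(axis=0) + eps     # per-feature error spread
    z = (err - mu) / sigma            # normalized errors, comparable across features
    return z.max(axis=1)              # worst normalized feature per time step
```

Because each feature's error is rescaled by its own statistics, a feature the model always reconstructs poorly contributes no more to the score than a well-reconstructed one, which is the property the abstract describes.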

Files:

- main.py
- attack.py - adversarial attacks
- defend.py - defenses against attacks
- LSTM_Autoencoder.py - LSTM autoencoder model and training
- variables.py - hyperparameters
- evaluator.py - evaluation metrics
- dataset.py - dataset
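As a rough illustration of the constraint-compliant perturbation step described above, the following sketch adds a gradient-sign perturbation and then projects the result back into valid ranges. This is a hypothetical FGSM-style example; `sensor_bounds`, `actuator_cols`, and the integer-rounding rule are our assumptions, not the exact logic in attack.py.

```python
import numpy as np

def constrained_perturbation(x, grad_sign, epsilon, sensor_bounds, actuator_cols):
    """Add a bounded perturbation while keeping samples protocol-valid.

    x:             (T, F) window of sensor/actuator readings
    grad_sign:     (T, F) sign of the loss gradient w.r.t. x
    epsilon:       perturbation magnitude
    sensor_bounds: (lo, hi) per-feature valid value ranges
    actuator_cols: column indices of discrete actuator features
    """
    x_adv = x + epsilon * grad_sign     # gradient-sign perturbation
    lo, hi = sensor_bounds
    x_adv = np.clip(x_adv, lo, hi)      # respect per-feature value ranges
    # Actuators only take discrete states, so snap them back to integers.
    x_adv[:, actuator_cols] = np.round(x_adv[:, actuator_cols])
    return x_adv
```

Clipping to per-feature bounds and rounding actuator values is one simple way to keep perturbed samples within protocol specifications and physically meaningful ranges.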

Citation:

@article{liu2024adversarial,
  title={Adversarial Sample Attacks and Defenses based on LSTM-ED in Industrial Control Systems},
  author={Liu, Yaru and Xu, Lijuan and Yang, Shumian and Zhao, Dawei and Li, Xin},
  journal={Computers \& Security},
  pages={103750},
  year={2024},
  publisher={Elsevier}
}
