This is a lite version of EvilModel, which hides malware in neural network models without affecting their performance. EvilModel 2.0 describes three methods for embedding malware into models. The method with the least impact on model performance is "half substitution", which replaces two of the four bytes of each 32-bit float parameter with malware bytes. This repository is an implementation of "half substitution" from EvilModel.

Note: this is not the full code of EvilModel; it is only a demo.
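As background for the demo, half substitution can be sketched in a few lines of plain Python: each 32-bit float parameter keeps its sign, exponent, and high mantissa bits, while its two least-significant bytes are overwritten with malware bytes. This is only an illustrative sketch, not the repository's actual code; `half_substitute` and `extract_bytes` are names chosen here for illustration.

```python
import struct

def half_substitute(weight: float, payload: bytes) -> float:
    """Overwrite the two least-significant bytes of a float32 weight.

    In little-endian float32 layout, bytes 0-1 hold the low mantissa
    bits, so replacing them perturbs the value only slightly.
    """
    assert len(payload) == 2
    raw = bytearray(struct.pack("<f", weight))  # 4-byte float32
    raw[0:2] = payload                          # embed 2 malware bytes
    return struct.unpack("<f", bytes(raw))[0]

def extract_bytes(weight: float) -> bytes:
    """Recover the two embedded bytes from a modified weight."""
    return struct.pack("<f", weight)[:2]

w = half_substitute(0.123, b"\xde\xad")
print(extract_bytes(w))   # → b'\xde\xad' (the bytes round-trip)
```

Because only the low half of the mantissa changes, the sign and exponent are untouched and the weight moves by less than one part in a hundred, which is why this variant has the least impact on model accuracy.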
The `ImageNet` dataset is needed to test the performance of the models. Please download ImageNet 2012 (`ILSVRC2012_img_val.tar` and `ILSVRC2012_devkit_t12.tar.gz`) and put them in the `Imagenet` dir.
The `models` dir contains the offline pretrained models saved from the PyTorch repository. The model used in this demo is ResNet50 (`resnet50-19c8e357.pth`). The offline models used in the EvilModel experiments are available at Google Drive.
The `Malware` dir contains the samples we used in the experiments. We only use `Lazurus` in this demo. The full version of `Malware` is collected from theZoo and InQuest, and is available at Google Drive (unzip password: `malware`).
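To show how a sample like the one in `Malware` could be carried by a whole model, here is a sketch that spreads an arbitrary byte string across a flat list of float32 parameters, two bytes per parameter. The chunking scheme (zero-padding the payload to an even length and passing its true length to the extractor) is an assumption made for this sketch, not EvilModel's actual on-disk format.

```python
import struct

def embed_payload(weights, payload):
    """Write `payload` into `weights`, 2 bytes per float32 parameter.

    Returns the modified weights plus the payload length, which the
    extractor needs to strip any trailing padding byte (an assumption
    of this sketch, not EvilModel's real format).
    """
    padded = payload + b"\x00" * (len(payload) % 2)  # pad to even length
    if len(padded) // 2 > len(weights):
        raise ValueError("model too small for payload")
    out = list(weights)
    for i in range(len(padded) // 2):
        raw = bytearray(struct.pack("<f", out[i]))
        raw[0:2] = padded[2 * i:2 * i + 2]   # overwrite low mantissa bytes
        out[i] = struct.unpack("<f", bytes(raw))[0]
    return out, len(payload)

def extract_payload(weights, length):
    """Read `length` bytes back out of the modified weights."""
    n = (length + 1) // 2
    data = b"".join(struct.pack("<f", w)[:2] for w in weights[:n])
    return data[:length]

weights = [0.01 * i for i in range(8)]
stego, n = embed_payload(weights, b"MZ\x90\x00demo")
print(extract_payload(stego, n))   # → b'MZ\x90\x00demo'
```

In the real attack the same idea is applied to the tensors of a pretrained model such as the bundled ResNet50, whose tens of millions of parameters give roughly two bytes of capacity each.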
Other implementations

They're awesome :)

- Implementation 1 (by @Gábor Vecsei)
- Implementation 2
- Implementation 3
EvilModel

```bibtex
@INPROCEEDINGS{EvilModel2021,
  author={Wang, Zhi and Liu, Chaoge and Cui, Xiang},
  booktitle={2021 IEEE Symposium on Computers and Communications (ISCC)},
  title={{EvilModel}: Hiding Malware Inside of Neural Network Models},
  year={2021},
  pages={1-7},
  doi={10.1109/ISCC53001.2021.9631425}
}
```
EvilModel 2.0

```bibtex
@ARTICLE{evilmodel2,
  title={{EvilModel} 2.0: Bringing Neural Network Models into Malware Attacks},
  author={Wang, Zhi and Liu, Chaoge and Cui, Xiang and Yin, Jie and Wang, Xutong},
  journal={Computers \& Security},
  volume={120},
  pages={102807},
  year={2022},
  issn={0167-4048},
  doi={10.1016/j.cose.2022.102807}
}
```