This is the official code for the USENIX Security '24 paper:
MAGIC: Detecting Advanced Persistent Threats via Masked Graph Representation Learning
In this paper, we introduce MAGIC, a novel and flexible self-supervised approach for multi-granularity APT detection. MAGIC leverages masked graph representation learning to model benign system entities and behaviors, performing efficient deep feature extraction and structure abstraction on provenance graphs. By ferreting out anomalous system behaviors via outlier detection methods, MAGIC is able to perform detection at both the system entity level and the batched log level. MAGIC is specially designed to handle concept drift with a model adaptation mechanism and applies to a wide range of conditions and detection scenarios.
conda install -c dglteam/label/th24_cu121 dgl
conda install packaging
conda install gdown
conda install pandas
conda install scikit-learn
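As a quick sanity check (illustrative only, not part of the repository), the snippet below verifies that PyTorch and DGL import cleanly and that CUDA is visible, which the th24_cu121 channel label above assumes:

```python
# Environment sanity check (illustrative only, not part of MAGIC).
# Confirms PyTorch and DGL import correctly and reports CUDA availability,
# matching the th24_cu121 (torch 2.4 / CUDA 12.1) channel label used above.
import torch
import dgl

print("torch:", torch.__version__)
print("dgl:", dgl.__version__)
print("CUDA available:", torch.cuda.is_available())
```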
# Evaluation for the pre-trained model
(clinical_multimodal) ubuntu@ip-10-0-15-201:~/workspace/MAGIC$ python eval.py --dataset trace
AUC: 0.9998153809050484
F1: 0.995736028228789
PRECISION: 0.9917104937282383
RECALL: 0.9997943776987928
TN: 615452
FN: 14
TP: 68072
FP: 569
#Test_AUC: 0.9998±0.0000
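As a sanity check on these numbers, precision, recall and F1 follow directly from the TP/FP/TN/FN counts printed above; the snippet below (illustrative only) recomputes them:

```python
# Recompute the reported metrics from the confusion-matrix counts above
# (pre-trained model, trace dataset).
TP, FP, TN, FN = 68072, 569, 615452, 14

precision = TP / (TP + FP)                                  # ~0.99171
recall = TP / (TP + FN)                                     # ~0.99979
f1 = 2 * precision * recall / (precision + recall)          # ~0.99574

print(f"precision={precision:.6f} recall={recall:.6f} f1={f1:.6f}")
```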
# Re-training
(clinical_multimodal) ubuntu@ip-10-0-15-201:~/workspace/MAGIC$ time python train.py --dataset trace
Epoch 49 | train_loss: 0.1469: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [03:53<00:00, 4.67s/it]
real 3m57.884s
user 3m35.311s
sys 0m22.623s
# Evaluation for the re-trained model
(clinical_multimodal) ubuntu@ip-10-0-15-201:~/workspace/MAGIC$ time python eval.py --dataset trace
Accuracy: 0.9982736618686843
AUC: 0.9999333852470714
F1: 0.991399963090059
PRECISION: 0.9831453371654704
RECALL: 0.9997943776987928
TN: 614854
FN: 14
TP: 68072
FP: 1167
#Test_AUC: 0.9999±0.0000
real 47m55.508s
user 190m8.796s
sys 0m4.776s
Download the dataset and ground truth, and then unzip the dataset into the data/theia folder.
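A minimal sketch of the unpacking step, assuming the THEIA archive named in the dataset instructions below (ta1-theia-e3-official-6r.json.tar.gz) was downloaded to the repository root; adjust the path to wherever your download actually lives:

```python
# Sketch: extract the downloaded THEIA archive into data/theia/.
# The archive name comes from the DARPA TC instructions below; the download
# location is an assumption -- point `archive` at your actual file.
import tarfile
from pathlib import Path

archive = Path("ta1-theia-e3-official-6r.json.tar.gz")
target = Path("data/theia")
target.mkdir(parents=True, exist_ok=True)

with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(path=target)
```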
python trace_parser.py --dataset theia 413.85s user 45.03s system 77% cpu 9:54.83 total
python train.py --dataset theia 182.05s user 15.45s system 98% cpu 3:19.70 total
(py310) ➜ MAGIC git:(main) ✗ time python eval.py --dataset theia
Accuracy: 0.9986860691423484
AUC: 0.9987634942492317
F1: 0.9911331201326604
PRECISION: 0.9824602250679084
RECALL: 0.9999605039693511
TN: 318996
FN: 1
TP: 25318
FP: 452
#Test_AUC: 0.9988±0.0000
python eval.py --dataset theia 3042.40s user 12.92s system 392% cpu 12:57.63 total
python trace_parser.py --dataset cadets 317.55s user 15.70s system 99% cpu 5:33.33 total
python train.py --dataset cadets 223.21s user 23.28s system 99% cpu 4:06.98 total
(py310) ➜ MAGIC git:(main) ✗ time python eval.py --dataset cadets
Accuracy: 0.993778925058725
AUC: 0.9935882910556456
F1: 0.9202326244312197
PRECISION: 0.8538974017321785
RECALL: 0.9977424879339872
TN: 342134
FN: 29
TP: 12817
FP: 2193
#Test_AUC: 0.9936±0.0000
python eval.py --dataset cadets 3147.44s user 14.26s system 388% cpu 13:34.76 total
We use two public datasets for evaluation on batched log level detection: StreamSpot and Unicorn Wget.
We use the DARPA Transparent Computing Engagement 3 sub-datasets E3-Trace, E3-THEIA and E3-CADETS for evaluation on system entity level detection.
Due to the enormous size of these datasets, we include our pre-processed datasets in the data/ folder. In each sub-directory under the data/ folder, there is a .zip file. You need to unzip each of these .zip files into one graphs.pkl per dataset.
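A minimal sketch for unpacking those bundled archives, assuming each dataset sub-directory under data/ holds exactly one .zip containing its graphs.pkl:

```python
# Sketch: unzip every bundled archive under data/ in place so that each
# dataset directory ends up with its graphs.pkl. Assumes one .zip per
# sub-directory, as described above.
import zipfile
from pathlib import Path

for zip_path in Path("data").glob("*/*.zip"):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(path=zip_path.parent)
    print("extracted", zip_path)
```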
To pre-process these datasets from scratch, proceed as follows:
- StreamSpot Dataset
  - Download and unzip all.tar.gz from StreamSpot, which includes a single data file all.tsv. Copy all.tsv to data/streamspot.
  - Go to directory utils and run streamspot_parser.py. This will result in 600 graph data files in JSON format.
  - During training and evaluation, the function load_batch_level_dataset in utils/loaddata.py will automatically read and label these graphs and store them in the compressed data archive graphs.pkl for efficient data loading.
- Unicorn Wget Dataset
  - Download and unzip attack_baseline.tar.gz and benign.tar.gz from Wget. Copy all .log files into data/wget/raw/. Ignore the contents in base and stream.
  - Go to directory utils and run wget_parser.py. This will result in 150 graph data files in JSON format.
  - During training and evaluation, the function load_batch_level_dataset in utils/loaddata.py will automatically read and label these graphs and store them in the compressed data archive graphs.pkl for efficient data loading.
- DARPA TC E3 Sub-datasets
  - Go to the DARPA TC Engagement 3 data release.
  - Download and unzip ta1-trace-e3-official-1.json.tar.gz into data/trace/.
  - Download and unzip ta1-theia-e3-official-6r.json.tar.gz into data/theia/.
  - Download and unzip ta1-cadets-e3-official-2.json.tar.gz and ta1-cadets-e3-official.json.tar.gz into data/cadets/.
  - Do not delete log files that are not directly used for training and test purposes (e.g. ta1-theia-e3-official-6r.4-7.json). These files provide entity definitions for subsequent event records, including definitions for malicious entities.
  - Go to directory utils and run trace_parser.py with the argument --dataset. Valid choices are trace, theia and cadets.
  - MAGIC is evaluated on the DARPA TC datasets using the ThreaTrace labels. Go to ThreaTrace, download the .txt groundtruth files from the "groundtruth" folder and put them into the corresponding dataset folder of MAGIC, e.g. theia.txt into data/theia/theia.txt (a minimal loading sketch follows this list).
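The sketch below (not MAGIC's own evaluation code) shows how such a ThreaTrace groundtruth file is typically consumed, under the assumption that each non-empty line holds the UUID of one malicious entity:

```python
# Sketch: read a ThreaTrace groundtruth file (e.g. data/theia/theia.txt) into
# a set of malicious entity IDs. Assumes one UUID per line; MAGIC's actual
# evaluation code may parse these files differently.
from pathlib import Path

def load_groundtruth(path: str) -> set[str]:
    with Path(path).open() as f:
        return {line.strip() for line in f if line.strip()}

malicious = load_groundtruth("data/theia/theia.txt")
print(f"{len(malicious)} malicious entities labeled")
```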
We also describe an alternative labeling methodology for the DARPA TC datasets in our paper (Appendix G), together with the corresponding ground truth labels for the sub-datasets E3-Trace, E3-THEIA and E3-CADETS.
This is a guideline for reproducing MAGIC's evaluations. There are three options: Quick Evaluation, Standard Evaluation and Training from Scratch.
Make sure you have MAGIC's parameters saved in checkpoints/ and the KNN distances saved in eval_result/. Then execute eval.py, specifying the evaluation dataset, with the following command:
python eval.py --dataset *your_dataset*
Standard evaluation trains the detection module from scratch, so the KNN distances saved in eval_result/ need to be removed. MAGIC's parameters in checkpoints/ are still needed. Execute eval.py with the same command to run standard evaluation:
python eval.py --dataset *your_dataset*
In this option, everything, including MAGIC's graph representation module and its detection module, is trained from raw data. Remove the model parameters from checkpoints/ and the saved KNN distances from eval_result/, then execute train.py to train the graph representation module.
python train.py --dataset *your_dataset*
Then execute eval.py the same way as in standard evaluation:
python eval.py --dataset *your_dataset*
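The artifact cleanup required at the start of this option can also be scripted. The sketch below is a rough helper whose glob patterns are assumptions about how files under checkpoints/ and eval_result/ are named; list the matches before actually deleting anything:

```python
# Sketch: clear previously saved model parameters and KNN distances so that
# train.py and eval.py rebuild everything from raw data. The file-name
# patterns are assumptions -- review what matches before deleting.
from pathlib import Path

dataset = "trace"  # the dataset you are retraining on
for directory in (Path("checkpoints"), Path("eval_result")):
    if not directory.exists():
        continue
    for f in directory.glob(f"*{dataset}*"):
        print("removing", f)
        f.unlink()
```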
For more running options, please refer to utils/config.py.
If you make use of MAGIC in your research, please cite the following in your manuscript:
@inproceedings{jia2024magic,
title = {MAGIC: Detecting Advanced Persistent Threats via Masked Graph Representation Learning},
author = {Zian Jia and
Yun Xiong and
Yuhong Nan and
Yao Zhang and
Jinjing Zhao and
Mi Wen},
booktitle = {33rd USENIX Security Symposium, USENIX Security 2024},
year = {2024},
}