TED++ (Submanifold-Aware Backdoor Detection) is a robust backdoor detection method for deep learning models, designed to work with minimal validation data. It extends the original TED approach by introducing locally adaptive ranking, improving detection accuracy and robustness.
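The core idea can be illustrated with a minimal, self-contained sketch (function name and data layout are illustrative, not the repository's API): for each layer, sort the validation set by distance to the input and record the rank of the nearest validation sample sharing the model's predicted label. Benign inputs tend to keep low ranks across layers, while backdoored inputs typically sit far from their target class in early layers.

```python
import numpy as np

def layerwise_ranks(input_feats, val_feats, val_labels, pred_label):
    """Conceptual sketch of TED-style topological ranking.

    For each layer, return the 1-based rank of the nearest validation
    sample that shares the predicted label, among all validation samples
    ordered by distance to the input. Assumes every class appears in the
    validation set; this is not the repository's actual implementation.
    """
    ranks = []
    for f, V in zip(input_feats, val_feats):
        dists = np.linalg.norm(V - f, axis=1)   # distance to each validation sample
        order = np.argsort(dists)               # nearest validation samples first
        hits = val_labels[order] == pred_label  # neighbours sharing the predicted label
        ranks.append(int(np.argmax(hits)) + 1)  # position of the first match
    return ranks
```

Roughly, TED flags inputs whose rank trajectory across layers deviates from those of benign validation samples; TED++ makes this ranking locally adaptive to the geometry around each sample.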
This implementation builds upon:

- TED: Topological Evolution Dynamics for backdoor detection.
- TED++: Locally Adaptive Ranking for enhanced robustness.

We thank the authors of these repositories for their foundational work.
- Integrates with a wide range of backdoor attacks and defenses.
- Modular and extensible for research purposes.
- Install PyTorch (with CUDA if available); see the PyTorch installation guide.
- Install the other dependencies:

```bash
pip install -r requirement.txt
```
- CIFAR10, GTSRB, MNIST, and TinyImageNet: downloaded automatically.
- ImageNet: download manually from Kaggle and set the path in `config.py` (`imagenet_dir`).
- Initialize clean/validation data (required before any experiment):

```bash
python create_clean_set.py -dataset=<DATASET> -clean_budget <N>
```

`<DATASET>`: `cifar10`, `gtsrb`, `mnist`, `tinyimagenet`, `ember`, `imagenet`. `<N>`: `2000` for `cifar10`, `gtsrb`, `mnist`, `tinyimagenet`; `5000` for `imagenet`.
Both TED and TED++ are implemented in `other_defenses_tool_box/TED.py` and `other_defenses_tool_box/TEDPLUS.py`, and are run via the unified interface in `other_defense.py`.
```bash
python other_defense.py -defense=TED -dataset=cifar10 -poison_type=badnet -poison_rate=0.01
python other_defense.py -defense=TEDPLUS -dataset=cifar10 -poison_type=badnet -poison_rate=0.01
```

Common arguments:
- `-dataset`: Dataset name (`cifar10`, `gtsrb`, `mnist`, `tinyimagenet`, `imagenet`, etc.)
- `-poison_type`: Type of backdoor attack (`badnet`, `blend`, etc.)
- `-poison_rate`: Poisoning rate (e.g., `0.01`)
- `-cover_rate`, `-alpha`, `-test_alpha`, `-trigger`, etc. (see `utils/default_args.py` for all options)
- `-validation_per_class`: Number of validation samples per class (default: 20)
- `-num_test_samples`: Number of test samples (default: 50)
- `-class_ratio`: (TED++ only) Ratio of missing classes (default: 0)
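For illustration, the single-dash flag convention above could be mirrored with a plain `argparse` setup. This is a hypothetical fragment; the authoritative option definitions and defaults live in `utils/default_args.py`.

```python
import argparse

# Hypothetical fragment mirroring the flags listed above; the real parser
# in utils/default_args.py defines many more options and the actual defaults.
parser = argparse.ArgumentParser()
parser.add_argument('-defense', required=True)
parser.add_argument('-dataset', default='cifar10')
parser.add_argument('-poison_type', default='badnet')
parser.add_argument('-poison_rate', type=float, default=0.01)
parser.add_argument('-validation_per_class', type=int, default=20)
parser.add_argument('-num_test_samples', type=int, default=50)
parser.add_argument('-class_ratio', type=float, default=0)  # TED++ only

# argparse accepts both `-flag=value` and `-flag value` forms
args = parser.parse_args(['-defense=TEDPLUS', '-poison_rate=0.02'])
```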
For full options:
```bash
python other_defense.py -h
```

Full pipeline:

- Prepare clean/validation data (see above).
- Create a poisoned training set (if needed):

```bash
python create_poisoned_set.py -dataset=cifar10 -poison_type=badnet -poison_rate=0.01
```

- Train a model on the poisoned set:

```bash
python train_on_poisoned_set.py -dataset=cifar10 -poison_type=badnet -poison_rate=0.01
```

- Run TED or TED++:

```bash
python other_defense.py -defense=TED -dataset=cifar10 -poison_type=badnet -poison_rate=0.01
# or
python other_defense.py -defense=TEDPLUS -dataset=cifar10 -poison_type=badnet -poison_rate=0.01
```
If you use TED++ in your research, please cite our paper:
@article{le2025ted++,
title={TED++: Submanifold-Aware Backdoor Detection via Layerwise Tubular-Neighbourhood Screening},
author={Le, Nam and Zhang, Leo Yu and Liao, Kewen and Pan, Shirui and Luo, Wei},
journal={arXiv preprint arXiv:2510.14299},
year={2025}
}
And acknowledge:
