mmAnomaly: Leveraging Visual Context for Robust Anomaly Detection in the Non-Visual World with mmWave Radar
This repository contains the official code for the SenSys 2026 paper "mmAnomaly: Leveraging Visual Context for Robust Anomaly Detection in the Non-Visual World with mmWave Radar."
0. Requirements: Python 3.10, NVIDIA H100 GPU.
1. Install Environment:
```bash
conda create -n mmanomaly python=3.10 -y
conda activate mmanomaly
pip install -r requirements.txt
```

2. Compile Cython:

```bash
python cythons/depth/setup.py build_ext --inplace
python cythons/project/setup.py build_ext --inplace
```

3. Download Dataset:
- Download the dataset from here.
- Organize them under the `dataset` folder with the following structure:
```
dataset
└── weapon
    └── <dataset_name>
        └── <capture_id>
            ├── color.avi                            # RGB video frames
            ├── color_config.json                    # RGB sensor calibration
            ├── depth.zst                            # Compressed depth maps
            ├── depth_config.json                    # Depth sensor calibration
            └── azi_fft_<capture_id>_<frame_id>.jpg  # Radar azimuth FFT
```
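After downloading, a quick stdlib-only check can confirm that every capture directory contains the expected files. This is a minimal sketch, not part of the repository: the root path `dataset/weapon` and the required file names are taken from the tree above, and the two-level `<dataset_name>/<capture_id>` nesting is an assumption — adjust to your layout.

```python
"""Sanity-check the dataset layout described in the README tree.

Assumptions: captures live two levels below the root
(dataset/weapon/<dataset_name>/<capture_id>), and each capture holds the
four fixed files plus at least one azi_fft_*.jpg radar frame.
"""
from pathlib import Path

REQUIRED = ["color.avi", "color_config.json", "depth.zst", "depth_config.json"]


def check_captures(root: str) -> dict[str, list[str]]:
    """Return {capture_dir: [missing items]} for every capture under root."""
    missing: dict[str, list[str]] = {}
    for capture in Path(root).glob("*/*"):
        if not capture.is_dir():
            continue
        absent = [f for f in REQUIRED if not (capture / f).exists()]
        # Azimuth FFT images are per-frame, so just require at least one.
        if not any(capture.glob("azi_fft_*.jpg")):
            absent.append("azi_fft_<capture_id>_<frame_id>.jpg")
        if absent:
            missing[str(capture)] = absent
    return missing


if __name__ == "__main__":
    problems = check_captures("dataset/weapon")
    print("OK" if not problems else problems)
```

Run it once after unpacking; an empty result means the layout matches the tree above.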
4. Download Checkpoints:
- Download the four pretrained model checkpoints from here.
- Place them under the `checkpoints` folder.
5. Run Inference:

Run the inference pipeline from the repository root:

```bash
python inference.py
```

The script loads checkpoints for clothing classification, environmental context, cross-modal generation, and anomaly detection. Upon completion, it reports per-stage timing metrics and per-class precision/recall statistics.
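The per-class precision/recall that `inference.py` reports follows the standard one-vs-rest definitions. The sketch below illustrates the computation only; the class labels and counting code are illustrative assumptions, not the script's actual implementation.

```python
"""Minimal per-class precision/recall, as reported after inference.

The labels in the demo below ("weapon"/"benign") are illustrative; the
real label set comes from the anomaly-detection stage of the pipeline.
"""
from collections import Counter


def per_class_metrics(y_true, y_pred):
    """Return {class: (precision, recall)} from parallel label lists."""
    classes = set(y_true) | set(y_pred)
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # p predicted but wrong
            fn[t] += 1          # t missed
    return {
        c: (
            tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0,
            tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0,
        )
        for c in classes
    }


if __name__ == "__main__":
    truth = ["weapon", "benign", "weapon", "benign"]
    preds = ["weapon", "weapon", "weapon", "benign"]
    # weapon: precision 2/3, recall 1.0; benign: precision 1.0, recall 0.5
    print(per_class_metrics(truth, preds))
```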
If you find this work useful in your research, please consider citing:
@inproceedings{toha2026mmanomaly,
author = {Tarik Reza Toha and Shao-Jung (Louie) Lu and Mahathir Monjur and Shahriar Nirjon},
title = "{mmAnomaly: Leveraging Visual Context for Robust Anomaly Detection in the Non-Visual World with mmWave Radar}",
booktitle = {Proceedings of the 24th ACM/IEEE International Conference on Embedded Artificial Intelligence and Sensing Systems (SenSys)},
year = {2026},
month = {May},
publisher = {ACM},
address = {Saint-Malo, France},
url = {https://doi.org/10.1145/3774906.3802773}
}
We thank the following projects, whose great work inspired ours: img2img-turbo, ViT-pytorch.
For any questions, please contact us at: ttoha12@cs.unc.edu