
mmAnomaly: Leveraging Visual Context for Robust Anomaly Detection in the Non-Visual World with mmWave Radar


This repository contains the official code for the SenSys 2026 paper "mmAnomaly: Leveraging Visual Context for Robust Anomaly Detection in the Non-Visual World with mmWave Radar."

Setup

0. Requirements: Python 3.10, NVIDIA H100 GPU.

1. Install Environment:

conda create -n mmanomaly python=3.10 -y
conda activate mmanomaly
pip install -r requirements.txt

2. Compile Cython:

python cythons/depth/setup.py build_ext --inplace
python cythons/project/setup.py build_ext --inplace

3. Download Dataset:

  • Download the dataset from here.
  • Organize them under the dataset folder with the following structure:
dataset
└── weapon
    └── <dataset_name>
        └── <capture_id>
            ├── color.avi                               # RGB video frames
            ├── color_config.json                       # RGB sensor calibration
            ├── depth.zst                               # Compressed depth maps
            ├── depth_config.json                       # Depth sensor calibration
            └── azi_fft_<capture_id>_<frame_id>.jpg     # Radar azimuth FFT
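After extracting the dataset, it can be worth verifying that every capture folder matches the tree above before running anything. The sketch below is not part of the released code; file names are taken from the tree, and the radar FFT frames are matched by prefix since frame IDs vary per capture:

```python
from pathlib import Path

# Per-capture files expected by the README's directory tree.
REQUIRED = ["color.avi", "color_config.json", "depth.zst", "depth_config.json"]

def check_capture(capture_dir: Path) -> list[str]:
    """Return the list of required files missing from one <capture_id> folder."""
    missing = [f for f in REQUIRED if not (capture_dir / f).is_file()]
    # Radar azimuth FFT frames carry the capture and frame IDs in their names,
    # so match them by prefix rather than exact name.
    if not list(capture_dir.glob("azi_fft_*.jpg")):
        missing.append("azi_fft_<capture_id>_<frame_id>.jpg")
    return missing

def check_dataset(root: Path = Path("dataset/weapon")) -> dict[str, list[str]]:
    """Map each incomplete capture path to its missing files (empty = all good)."""
    problems = {}
    # dataset/weapon/<dataset_name>/<capture_id>
    for capture in sorted(p for p in root.glob("*/*") if p.is_dir()):
        missing = check_capture(capture)
        if missing:
            problems[str(capture)] = missing
    return problems
```

Running `check_dataset()` from the repository root returns an empty dict when the layout is complete.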

4. Download Checkpoints:

  • Download the four pretrained model checkpoints from here.

  • Place them under the checkpoints folder.
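A quick sanity check that all four checkpoints are in place can catch path mistakes early. This is an illustrative sketch, not part of the repository; the `.pth`/`.pt` extensions are an assumption based on common PyTorch conventions:

```python
from pathlib import Path

def list_checkpoints(ckpt_dir: str = "checkpoints") -> list[str]:
    """Return the checkpoint file names, failing loudly if any are missing.

    The README specifies four pretrained checkpoints; the extension glob
    (.pth/.pt) is an assumption based on PyTorch conventions.
    """
    paths = sorted(Path(ckpt_dir).glob("*.pt*"))
    if len(paths) != 4:
        raise FileNotFoundError(
            f"expected 4 checkpoints in {ckpt_dir!r}, found {len(paths)}")
    return [p.name for p in paths]
```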

Usage

Run the inference pipeline from the repository root:

python inference.py

The script loads checkpoints for clothing classification, environmental context, cross-modal generation, and anomaly detection. Upon completion, it reports per-stage timing metrics and per-class precision/recall statistics.
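For reference, per-class precision/recall can be computed from parallel label lists as below. This is an illustrative sketch of the metric itself, not the implementation inside `inference.py`:

```python
from collections import Counter

def per_class_precision_recall(y_true, y_pred):
    """Per-class precision and recall from parallel ground-truth/prediction lists."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1        # correct prediction for class t
        else:
            fp[p] += 1        # p was predicted but the truth was t
            fn[t] += 1        # t was missed
    classes = set(y_true) | set(y_pred)
    return {
        c: {
            "precision": tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0,
            "recall": tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0,
        }
        for c in sorted(classes)
    }
```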

Citation

If you find this work useful in your research, please consider citing:

@inproceedings{toha2026mmanomaly,
    author    = {Tarik Reza Toha and Shao-Jung (Louie) Lu and Mahathir Monjur and Shahriar Nirjon},
    title     = {{mmAnomaly}: Leveraging Visual Context for Robust Anomaly Detection in the Non-Visual World with {mmWave} Radar},
    booktitle = {Proceedings of the 24th ACM/IEEE International Conference on Embedded Artificial Intelligence and Sensing Systems (SenSys)},
    year      = {2026},
    month     = {May},
    publisher = {ACM},
    address   = {Saint-Malo, France},
    url       = {https://doi.org/10.1145/3774906.3802773}
}

Acknowledgements

We thank the authors of the following projects, whose work inspired ours: img2img-turbo, ViT-pytorch.

Contact

For any questions, please contact us at: ttoha12@cs.unc.edu
