Track 1: Multi-Camera People Tracking
The official repository for the 7th NVIDIA AI City Challenge (Track 1).
Requirements
Our experiments run on 2 NVIDIA A6000 GPUs.
- Linux or macOS
- Python 3.7+ (Python 3.8 in our envs)
- PyTorch 1.9+ (1.11.0 in our envs)
- CUDA 10.2+ (CUDA 11.3 in our envs)
- mmcv-full==1.7.1 (MMCV)
Installation
- Step #1. Create the environment (recommended)
conda env create --file environment.yaml
conda activate scit
- Step #2. Install packages
sh setup.sh
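If the installation succeeded, a quick version check (a minimal sketch, assuming the packages listed above are installed in the scit environment) should roughly match the versions noted in the requirements:

```python
# Minimal sanity check for the environment described above (version numbers are the
# ones listed in the requirements; yours may differ slightly).
import torch
import mmcv

print("PyTorch:", torch.__version__)          # 1.11.0 in our envs (1.9+ required)
print("CUDA build:", torch.version.cuda)      # 11.3 in our envs (10.2+ required)
print("CUDA available:", torch.cuda.is_available())
print("GPUs visible:", torch.cuda.device_count())
print("MMCV:", mmcv.__version__)              # mmcv-full==1.7.1
```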
Object Detection
- Train on Dataset
We use mmdetection's YOLOX-X model. Follow mmdetection's guidelines for training.
- Pretrained
Pretrained person detection model weights, trained on the NVIDIA Omniverse dataset from the 2023 AI City Challenge Track 1. A minimal inference sketch is shown below.
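For reference, the sketch below shows one way to load a YOLOX-X detector trained with mmdetection and run it on a single frame. The config and checkpoint paths are placeholders (not files shipped with this repository), and the snippet assumes mmdetection's `init_detector` / `inference_detector` API from the 2.x series.

```python
# Hedged sketch: single-frame inference with an mmdetection YOLOX-X person detector.
# Config/checkpoint paths are placeholders; swap in your trained or downloaded files.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/yolox/yolox_x_8x8_300e_coco.py'      # placeholder config
checkpoint_file = 'work_dirs/yolox_x_person/latest.pth'     # placeholder weights
model = init_detector(config_file, checkpoint_file, device='cuda:0')

result = inference_detector(model, 'frame_000001.jpg')
person_boxes = result[0]   # class 0 boxes as (x1, y1, x2, y2, score), COCO "person"
print(person_boxes.shape)
```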
Keypoint Detection
- Pretrained
We directly use the pretrained pose estimation model from yolov7-pose-estimation. You can download it from their GitHub page; a hedged loading sketch follows below.
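As a rough illustration (not a command from this repo), such a checkpoint can typically be loaded as shown below. The yolov7-w6-pose.pt filename and the ['model'] key are assumptions about that repository's checkpoint layout, and the yolov7 source code must be importable for the pickled model to deserialize.

```python
# Hedged sketch: loading a yolov7 pose-estimation checkpoint.
# Assumes the yolov7-w6-pose.pt file from the yolov7-pose-estimation page and the
# usual {'model': ...} checkpoint layout; the yolov7 source must be on PYTHONPATH.
import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
ckpt = torch.load('yolov7-w6-pose.pt', map_location=device)   # downloaded weights
model = ckpt['model'].float().eval()                          # assumed key layout
model = model.to(device)
```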
Trajectory Prediction
- Train on Dataset
We use the Social-Implicit model.
- Pretrained
Pretrained trajectory prediction model weights, trained on the NVIDIA Omniverse dataset from the 2023 AI City Challenge Track 1. A shape-only data-flow sketch is given below.
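To make the data flow concrete (this is not the Social-Implicit API), the predictor consumes a short history of ground-plane positions per person and returns future positions. The 8-frame observation / 12-frame prediction horizons below follow the common convention in the trajectory-prediction literature and are assumptions, as is the constant-velocity placeholder that stands in for the real model.

```python
# Shape-only illustration of the trajectory-prediction interface. The 8 observed /
# 12 predicted frame lengths and the constant-velocity extrapolation are assumptions
# for illustration, not the exact configuration used by this repo.
import torch

num_people = 5
obs_len, pred_len = 8, 12                      # assumed observation / prediction horizons

# Observed (x, y) positions on the ground plane for each tracked person.
observed = torch.randn(num_people, obs_len, 2)

def predict_future(observed_traj: torch.Tensor) -> torch.Tensor:
    """Placeholder for the trajectory predictor: constant-velocity extrapolation."""
    velocity = observed_traj[:, -1] - observed_traj[:, -2]       # last-step velocity
    steps = torch.arange(1, pred_len + 1).view(1, pred_len, 1)
    return observed_traj[:, -1:, :] + velocity.unsqueeze(1) * steps

predicted = predict_future(observed)           # (num_people, pred_len, 2)
print(predicted.shape)
```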
Tracking
- Step #1. Single-Camera Tracking.
sh run_scmt.sh
- Step #2. Multi-Camera Tracking (Association). Here is the homography_list.pkl required for this step.
sh run_mcmt.sh
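The association step projects per-camera detections onto a common ground plane using the homographies in homography_list.pkl. The sketch below assumes the pickle holds one 3x3 matrix per camera and projects the bottom-center (feet) point of each bounding box; the actual file layout and keys may differ.

```python
# Hedged sketch: projecting image points onto the ground plane with a 3x3 homography.
# Assumes homography_list.pkl stores one 3x3 matrix per camera; the real layout may differ.
import pickle
import numpy as np

with open('homography_list.pkl', 'rb') as f:
    homography_list = pickle.load(f)

def image_to_ground(points_xy: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Apply homography H to N image points (x, y) and return ground-plane (x, y)."""
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])  # to homogeneous coords
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]                           # back to Cartesian

# Example: bottom-center (feet) points of two boxes in camera 0 (hypothetical values).
feet_points = np.array([[640.0, 980.0], [300.0, 720.0]])
ground_xy = image_to_ground(feet_points, np.asarray(homography_list[0]))
print(ground_xy)
```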
Citation
@InProceedings{Jeon_2023_CVPR,
author = {Jeon, Yuntae and Tran, Dai Quoc and Park, Minsoo and Park, Seunghee},
title = {Leveraging Future Trajectory Prediction for Multi-Camera People Tracking},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2023},
pages = {5398-5407}
}