Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based Baseline


Xianda Guo, Zheng Zhu, Tian Yang, BeiBei Lin, Junjie Huang, Jiankang Deng, Guan Huang, Jie Zhou, Jiwen Lu.

News

  • [2024/6/24] Training and evaluation code release.
  • [2024/1] Paper released on arXiv.

Getting Started

0. Prepare datasets

We provide the following tutorials for your reference:

1. Supernet Training

```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -u -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs configs/sposgait/sposgait_large_GREW_supertraining_triplet.yaml --phase train
```
  • python -m torch.distributed.launch: the DDP launch command.
  • --nproc_per_node: the number of GPUs to use; it must equal the number of devices listed in CUDA_VISIBLE_DEVICES.
  • --cfgs: the path to the config file.
  • --phase: specified as train.
  • --log_to_file: if specified, the terminal log is also written to disk.

You can run commands in train.sh for training different models.
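During supernet training, single-path one-shot (SPOS) NAS activates one uniformly sampled path through the choice blocks at each iteration. A minimal pure-Python sketch of that sampling loop (the layer and choice counts are illustrative, not the repo's actual search space):

```python
import random

def sample_path(num_layers, num_choices):
    """Uniformly sample one candidate-op index per searchable layer."""
    return [random.randrange(num_choices) for _ in range(num_layers)]

# Toy supernet: 4 searchable layers, 3 candidate ops each.
random.seed(0)
for step in range(3):
    path = sample_path(num_layers=4, num_choices=3)
    # In real supernet training, only the ops on `path` receive this
    # iteration's forward/backward pass; the other weights are untouched.
    print(f"step {step}: path = {path}")
```

Uniform sampling keeps the shared weights of all candidate ops roughly equally trained, which is what makes the later search stage meaningful.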

2. Search

Multi-GPU search:

```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -u -m torch.distributed.launch --nproc_per_node=8  opengait/search.py --cfgs ./configs/sposgait/sposgait_large_GREW_supertraining_triplet.yaml --max-epochs 20
```
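The search stage of SPOS-style NAS is typically an evolutionary search over paths of the trained supernet. A toy sketch of that loop in plain Python, with a dummy fitness function standing in for supernet validation accuracy (all sizes, names, and hyperparameters here are illustrative):

```python
import random

def mutate(path, num_choices, prob=0.1):
    """Re-sample each gene independently with probability `prob`."""
    return [random.randrange(num_choices) if random.random() < prob else g
            for g in path]

def crossover(a, b):
    """Uniform crossover between two parent paths."""
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(fitness, num_layers=4, num_choices=3,
           pop_size=10, generations=5, topk=4):
    """Toy evolutionary search: keep the top-k paths each generation and
    refill the population by crossover + mutation. In the real pipeline,
    `fitness` would evaluate a path on validation data using the
    supernet's shared weights."""
    pop = [[random.randrange(num_choices) for _ in range(num_layers)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:topk]
        children = []
        while len(children) < pop_size - topk:
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b), num_choices))
        pop = parents + children
    return max(pop, key=fitness)

# Dummy fitness: prefer paths whose op indices sum to a target value.
random.seed(0)
best = evolve(lambda p: -abs(sum(p) - 6))
print(best)
```

The winning path is then retrained from scratch, which is what the ReTrain step below does.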

3. Retrain

Train a model by

```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -u -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/sposgait/retrain/sposgait_large_GREW-train20000id_retrain.yaml --phase train
```
  • python -m torch.distributed.launch: the DDP launch command.
  • --nproc_per_node: the number of GPUs to use; it must equal the number of devices listed in CUDA_VISIBLE_DEVICES.
  • --cfgs: the path to the config file.
  • --phase: specified as train.
  • --log_to_file: if specified, the terminal log is also written to disk.

You can run commands in train.sh for training different models.

4. Test

Evaluate the trained model by

```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/sposgait/retrain/sposgait_large_GREW-train20000id_retrain.yaml --phase test
```
  • --phase: specified as test.
  • --iter: specify an iteration checkpoint.

You can run commands in test.sh for testing different models.
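Conceptually, evaluation on GREW-style in-the-wild benchmarks is nearest-neighbor retrieval between probe and gallery embeddings. A schematic rank-1 computation in plain Python (illustrative only; the repo's evaluation code computes this over real extracted features):

```python
def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    """Fraction of probes whose nearest gallery embedding (by squared
    Euclidean distance) shares the probe's identity."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    hits = 0
    for feat, pid in zip(probe_feats, probe_ids):
        nearest = min(range(len(gallery_feats)),
                      key=lambda j: dist2(feat, gallery_feats[j]))
        hits += int(gallery_ids[nearest] == pid)
    return hits / len(probe_feats)

# Toy 2-D embeddings for three gallery identities and two probes.
gallery = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
gallery_ids = ["A", "B", "C"]
probes = [[0.1, 0.0], [1.9, 2.1]]
probe_ids = ["A", "C"]
print(rank1_accuracy(probes, probe_ids, gallery, gallery_ids))  # → 1.0
```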

Participants must package submission.csv for submission using `zip xxx.zip $CSV_PATH` and then upload it to CodaLab.
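The zip step can also be done from Python via the standard-library zipfile module. A small sketch (the CSV header below is a placeholder, not the required submission format):

```python
import zipfile

def package_submission(csv_path, zip_path):
    """Equivalent of `zip xxx.zip $CSV_PATH`: wrap the CSV for upload."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(csv_path)

# Create a dummy CSV and package it.
with open("submission.csv", "w") as f:
    f.write("placeholder,header\n")
package_submission("submission.csv", "submission.zip")
print(zipfile.ZipFile("submission.zip").namelist())  # → ['submission.csv']
```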

Calculate FLOPs and Params

```
CUDA_VISIBLE_DEVICES=0 python -u -m torch.distributed.launch --nproc_per_node=1 opengait/calculate_flops_and_params.py --cfgs configs/sposgait/retrain/sposgait_large_GREW-train20000id_retrain.yaml
```
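As a sanity check on the script's output, the analytic cost of a single convolution can be computed by hand. A generic cost model (note: this counts a multiply-accumulate as 2 FLOPs, while some profilers count it as 1, and the feature-map size below is only an example):

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out, bias=True):
    """Analytic params/FLOPs for one k x k Conv2d layer.

    params = k*k*c_in*c_out (+ c_out for bias)
    FLOPs  = 2 * k*k*c_in * h_out*w_out * c_out
    """
    params = k * k * c_in * c_out + (c_out if bias else 0)
    flops = 2 * k * k * c_in * h_out * w_out * c_out
    return params, flops

# Example: 3x3 conv, 64 -> 128 channels, on a 44x64 feature map.
params, flops = conv2d_cost(64, 128, 3, 44, 64)
print(f"params: {params/1e3:.1f}K  FLOPs: {flops/1e9:.2f}G")
# → params: 73.9K  FLOPs: 0.42G
```

Summing this over every layer of a sampled path gives the complexity that the search stage can trade off against accuracy.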

Acknowledgement

Citation

If this work is helpful for your research, please consider citing the following BibTeX entries.

@inproceedings{zhu2021gait,
  title={Gait recognition in the wild: A benchmark},
  author={Zhu, Zheng and Guo, Xianda and Yang, Tian and Huang, Junjie and Deng, Jiankang and Huang, Guan and Du, Dalong and Lu, Jiwen and Zhou, Jie},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={14789--14799},
  year={2021}
}
@article{guo2022gait,
  title={Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based Baseline},
  author={Guo, Xianda and Zhu, Zheng and Yang, Tian and Lin, Beibei and Huang, Junjie and Deng, Jiankang and Huang, Guan and Zhou, Jie and Lu, Jiwen},
  journal={arXiv e-prints},
  pages={arXiv--2205},
  year={2022}
}

Note: This code is provided for academic purposes only and may not be used for any commercial purpose.
