
Beyond Domain Gap: Exploiting Subjectivity in Sketch-Based Person Retrieval

Kejun Lin, Zhixiang Wang, Zheng Wang, Yinqiang Zheng, Shin'ichi Satoh

Accepted to ACMMM2023


(Teaser figure)

This is the official repository of the paper "Beyond Domain Gap: Exploiting Subjectivity in Sketch-Based Person Retrieval".

Dataset

Our proposed MaSk1K (short for Market-Sketch-1K) dataset is available here.

Download the dataset and the Market-1501 attributes from here, and put them into your <data_path>.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Update: 1) There is a minor typo in the dataset statistics in the paper: style F has 497 sketches for training and 493 for testing. 2) All our experiments are performed on a selected subset of photos, as provided in the Google Drive above. If you wish to experiment on the entire photo set, please download the Market-1501 dataset.

Guide For Market-Sketch-1K

(Architecture figure)

requirements

Download the necessary dependencies with:

pip install -r requirements.txt

preprocess

python preprocess.py --data_path=<data_path> --train_style <train_style> [--train_mq]
  • <data_path> should be replaced with the path to your data.
  • <train_style> refers to the styles you want to include in your training set. You can use any combination of styles A-F, such as B, AC, CEF, and so on.
  • The [--train_mq] argument is optional and enables multi-query during training. A concrete example is shown below.
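
For example, a minimal preprocessing run might look like the following. The data path ./data and the style combination AB are placeholders; substitute your own.

python preprocess.py --data_path=./data --train_style AB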

start training

python train.py --train_style <train_style> --test_style <test_style> [--train_mq] [--test_mq]
  • <train_style> and <test_style> should be replaced with the styles you want to use for your training and testing sets, respectively. Just like in the preprocessing step, you can use any combination of styles A-F.
  • The [--train_mq] argument enables multi-query during training, and [--test_mq] does the same during testing. See the example below.
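
For example, to train on sketch styles A and B and test on style C (placeholder choices; any combination of A-F works):

python train.py --train_style AB --test_style C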

Evaluation

python test.py --train_style <train_style> --test_style <test_style> --resume <model_filename> [--test_mq]
  • <train_style> should be replaced with the styles you used for your training.
  • <test_style> should be replaced with the styles you want to use for your testing.
  • <model_filename> should be the filename of your trained model.
  • The [--test_mq] argument enables multi-query during testing. See the example below.
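
For example, assuming a checkpoint saved as checkpoint.pth (a placeholder filename) from a model trained on styles A and B, evaluation on style C might look like:

python test.py --train_style AB --test_style C --resume checkpoint.pth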

Acknowledgements

Our code was built on the amazing codebases Cross-modal-Re-ID, CMAlign, and CLIP.

Citation

If you find our work helpful, please consider citing it using the following BibTeX entry.

@inproceedings{lin2023subjectivity,
  title={Beyond Domain Gap: Exploiting Subjectivity in Sketch-Based Person Retrieval},
  author={Lin, Kejun and Wang, Zhixiang and Wang, Zheng and Zheng, Yinqiang and Satoh, Shin'ichi},
  booktitle={ACM Multimedia},
  year={2023},
}
