This is Part 2 of BUPT_MCPRL_T2 for AICityChallenge2022 Track 2, containing the code of OSG used in the ablation study.
Paper Link: TBD
The code of Part 1 can be found here: https://github.com/dyhBUPT/OMG
- Download the repository

```shell
git clone https://github.com/binging512/AICity2022Track2-OSG.git
```
- Install the dependencies

```shell
pip install -r requirements.txt
```
- Split the annotated dataset into 5 folds for cross-validation

```shell
python misc/split_trainval.py
```
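The splitting logic lives in `misc/split_trainval.py` and is not reproduced here; below is only a minimal sketch of one common way to build a 5-fold split (the function name, the fixed seed, and the modulo assignment are our assumptions, not the repo's code):

```python
import random

def split_five_fold(query_ids, seed=0):
    # Shuffle the annotated query IDs reproducibly, then assign
    # fold = position mod 5, so fold sizes differ by at most one.
    ids = sorted(query_ids)
    random.Random(seed).shuffle(ids)
    folds = {i: [] for i in range(5)}
    for pos, qid in enumerate(ids):
        folds[pos % 5].append(qid)
    return folds

folds = split_five_fold([f"q{i}" for i in range(23)])
sizes = sorted(len(v) for v in folds.values())  # [4, 4, 5, 5, 5]
```

Fixing the seed keeps the folds identical across runs, which is what lets `--valnum N` pick the same validation fold every time.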
- Modify the data paths and checkpoint paths in `config.py`
- Train the OSG model

```shell
CUDA_VISIBLE_DEVICES=0 HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 python train.py --config configs/Swin+GRU+CLIP+NLP_AUG+COLOR.yaml --valnum 4
```

The training log will be written to `outputs/METHOD_NAME/METHOD_NAME_fold_N/debug.log`.
- Test the OSG model

```shell
CUDA_VISIBLE_DEVICES=0 HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 python test.py --config configs/Swin+GRU+CLIP+NLP_AUG+COLOR.yaml --valnum 4
```
The Natural Language Augmentation code can be found in `nlp/`.
- We use fanyi.baidu.com to perform English-Chinese-English back-translation

```shell
python nlp/nlp_fix_1.py
```
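The actual script talks to fanyi.baidu.com; as a rough illustration of the back-translation idea only, here is a sketch with the translation call stubbed out (the `translate(text, src, dst)` interface and the stub dictionary are ours, not the repo's):

```python
def back_translate(text, translate):
    # English -> Chinese -> English round trip; `translate` stands in
    # for the fanyi.baidu.com request used by the real script.
    zh = translate(text, src="en", dst="zh")
    return translate(zh, src="zh", dst="en")

# Stubbed translator for illustration only.
def fake_translate(text, src, dst):
    table = {
        ("A red sedan turns left.", "en", "zh"): "一辆红色轿车左转。",
        ("一辆红色轿车左转。", "zh", "en"): "A red car makes a left turn.",
    }
    return table[(text, src, dst)]

aug = back_translate("A red sedan turns left.", fake_translate)
```

The round trip paraphrases the description ("sedan" → "car", "turns left" → "makes a left turn") while keeping its meaning, which is exactly what makes back-translation useful for text augmentation.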
- We use spaCy to perform dependency parsing on the text descriptions

```shell
python nlp/nlp_spacy_2.py
```
- Then we generate the color and type of each vehicle using a voting strategy

```shell
python nlp/nlp_merge_3.py
```
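The exact voting logic is in `nlp/nlp_merge_3.py`; a minimal sketch of majority voting over a track's descriptions could look like this (the vocabulary sets and function name are our assumptions):

```python
from collections import Counter

COLORS = {"red", "blue", "black", "white", "gray", "green", "silver"}
TYPES = {"sedan", "suv", "truck", "van", "pickup", "hatchback", "bus"}

def vote_attribute(descriptions, vocab):
    # Each description votes for every attribute word it mentions;
    # the most frequent word wins.
    votes = Counter()
    for text in descriptions:
        for word in text.lower().replace(".", " ").split():
            if word in vocab:
                votes[word] += 1
    return votes.most_common(1)[0][0] if votes else None

descs = [
    "A red sedan turns left.",
    "Red vehicle going straight.",
    "A maroon sedan stops at the intersection.",
]
color = vote_attribute(descs, COLORS)    # 'red'
veh_type = vote_attribute(descs, TYPES)  # 'sedan'
```

Voting across the multiple descriptions of one track smooths over annotator disagreement (e.g. "maroon" vs. "red" above).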
- We also tried to decouple the appearance and motion information in the annotations

```shell
python nlp/nlp_decouple_4.py
```
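The decoupling itself is done in `nlp/nlp_decouple_4.py`; purely to illustrate the idea, here is a crude keyword heuristic that sorts clauses into appearance vs. motion (the keyword list and clause splitting are our simplifications, not the repo's method):

```python
MOTION_WORDS = {"turn", "turns", "turning", "stop", "stops", "straight",
                "left", "right", "goes", "going", "runs", "follows"}

def decouple(description):
    # Split on " and " and route each clause by whether it
    # contains a motion keyword.
    appearance, motion = [], []
    for clause in description.rstrip(".").split(" and "):
        words = set(clause.lower().split())
        (motion if words & MOTION_WORDS else appearance).append(clause)
    return " and ".join(appearance), " and ".join(motion)

app, mot = decouple(
    "A red sedan with a sunroof and it goes straight through the intersection."
)
# app: appearance clause, mot: motion clause
```

Separating the two lets the appearance part supervise the visual branch and the motion part supervise the trajectory branch independently.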
The vehicle IDs all come from the annotations of AICityChallenge2022 Track 1 (no code). We rearrange all vehicle IDs to begin from 0; the rearranging code can be found in `misc/add_id.py`.
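`misc/add_id.py` is not reproduced here; the core remapping idea (consecutive IDs from 0, in sorted order of the original Track 1 IDs) can be sketched as follows, with the function name being our own:

```python
def rearrange_ids(raw_ids):
    # Map the original Track 1 vehicle IDs to consecutive IDs
    # starting at 0, preserving their sorted order.
    return {old: new for new, old in enumerate(sorted(set(raw_ids)))}

id_map = rearrange_ids([105, 7, 33, 7, 901])
# → {7: 0, 33: 1, 105: 2, 901: 3}
```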