
Pre-training with DET and VID dataset #13

Closed
mukundkhanna123 opened this issue Sep 27, 2022 · 7 comments
@mukundkhanna123 commented Sep 27, 2022
Hey, I had a question about the training methodology used to pre-train the YOLOX baseline models. I added the images from the DET dataset with the same classes as the VID dataset and trained a YOLOX-S model, but I was not able to replicate your results. Could you elaborate on how you pre-trained the YOLOX model?

@YuHengsss (Owner) commented Sep 28, 2022

> Hey, I had a question about the training methodology used to pre-train the YOLOX baseline models. I added the images from the DET dataset with the same classes as the VID dataset and trained a YOLOX-S model, but I was not able to replicate your results. Could you elaborate on how you pre-trained the YOLOX model?

Thanks for your attention. May I ask how many images you sampled from the VID dataset, and could you please share the training log for debugging? We release the training-annotation link for the baseline model in README.md; it contains 1/10 of the VID images and all DET images with the same classes. You can try that one; we are eager for your response.
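The mixing described above (1/10 of the VID frames plus DET images restricted to the shared classes) could be sketched roughly as below. The function name, the every-10th-frame sampling rule, and the COCO-style dict layout are my assumptions, not the repo's actual tooling; the released `vid_det_train.json` is the authoritative reference (image-id remapping between the two datasets is also omitted here).

```python
# Hypothetical sketch: build a mixed training annotation from COCO-style dicts,
# keeping every `vid_stride`-th VID frame plus all DET images whose annotations
# only use classes shared with VID.

def build_mixed_annotations(vid_coco, det_coco, vid_stride=10):
    """Merge COCO-style dicts: 1/vid_stride of VID images + class-matched DET images."""
    vid_class_ids = {c["id"] for c in vid_coco["categories"]}

    # Sample every `vid_stride`-th VID image and keep its annotations.
    vid_images = vid_coco["images"][::vid_stride]
    vid_image_ids = {img["id"] for img in vid_images}
    vid_anns = [a for a in vid_coco["annotations"] if a["image_id"] in vid_image_ids]

    # Keep DET images whose annotations all fall in the shared class set.
    det_anns_by_img = {}
    for a in det_coco["annotations"]:
        det_anns_by_img.setdefault(a["image_id"], []).append(a)
    det_images, det_anns = [], []
    for img in det_coco["images"]:
        anns = det_anns_by_img.get(img["id"], [])
        if anns and all(a["category_id"] in vid_class_ids for a in anns):
            det_images.append(img)
            det_anns.extend(anns)

    return {
        "categories": vid_coco["categories"],
        "images": vid_images + det_images,
        "annotations": vid_anns + det_anns,
    }
```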

@YuHengsss (Owner) commented Sep 28, 2022

Besides, did you use the COCO pre-trained model for fine-tuning?

@mukundkhanna123 (Author)

I sampled all the images from the VID dataset and all the images from the DET dataset that have the same classes. Yes, I used the COCO pre-trained model for fine-tuning. I am getting an mAP of around 0.56 when validating on the entire VID dataset.

These are the training logs:
╒══════════════════╤═══════════════════════════════╕
│ keys │ values │
╞══════════════════╪═══════════════════════════════╡
│ seed │ None │
├──────────────────┼───────────────────────────────┤
│ output_dir │ './imagenet_det_vid_baseline' │
├──────────────────┼───────────────────────────────┤
│ print_interval │ 20 │
├──────────────────┼───────────────────────────────┤
│ eval_interval │ 1 │
├──────────────────┼───────────────────────────────┤
│ num_classes │ 30 │
├──────────────────┼───────────────────────────────┤
│ depth │ 0.33 │
├──────────────────┼───────────────────────────────┤
│ width │ 0.5 │
├──────────────────┼───────────────────────────────┤
│ data_num_workers │ 4 │
├──────────────────┼───────────────────────────────┤
│ input_size │ (576, 576) │
├──────────────────┼───────────────────────────────┤
│ random_size │ (18, 32) │
├──────────────────┼───────────────────────────────┤
│ train_ann │ 'vid_det_train.json' │
├──────────────────┼───────────────────────────────┤
│ val_ann │ 'vid_det_val.json' │
├──────────────────┼───────────────────────────────┤
│ degrees │ 10.0 │
├──────────────────┼───────────────────────────────┤
│ translate │ 0.1 │
├──────────────────┼───────────────────────────────┤
│ scale │ (0.1, 2) │
├──────────────────┼───────────────────────────────┤
│ mscale │ (0.8, 1.6) │
├──────────────────┼───────────────────────────────┤
│ shear │ 2.0 │
├──────────────────┼───────────────────────────────┤
│ perspective │ 0.0 │
├──────────────────┼───────────────────────────────┤
│ enable_mixup │ True │
├──────────────────┼───────────────────────────────┤
│ warmup_epochs │ 1 │
├──────────────────┼───────────────────────────────┤
│ max_epoch │ 30 │
├──────────────────┼───────────────────────────────┤
│ warmup_lr │ 0 │
├──────────────────┼───────────────────────────────┤
│ basic_lr_per_img │ 1.5625e-05 │
├──────────────────┼───────────────────────────────┤
│ scheduler │ 'yoloxwarmcos' │
├──────────────────┼───────────────────────────────┤
│ no_aug_epochs │ 2 │
├──────────────────┼───────────────────────────────┤
│ min_lr_ratio │ 0.05 │
├──────────────────┼───────────────────────────────┤
│ ema │ True │
├──────────────────┼───────────────────────────────┤
│ weight_decay │ 0.0005 │
├──────────────────┼───────────────────────────────┤
│ momentum │ 0.9 │
├──────────────────┼───────────────────────────────┤
│ exp_name │ 'yolox_s_mix_det' │
├──────────────────┼───────────────────────────────┤
│ test_size │ (576, 576) │
├──────────────────┼───────────────────────────────┤
│ test_conf │ 0.001 │
├──────────────────┼───────────────────────────────┤
│ nmsthre │ 0.6 │
╘══════════════════╧═══════════════════════════════╛
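One detail worth checking from this log: in upstream YOLOX the actual optimizer learning rate is derived from `basic_lr_per_img` scaled by the total batch size, and the batch size is not recorded above. A minimal sketch, assuming a batch size of 64 (the 64 is my assumption, not something the log states):

```python
# YOLOX derives the optimizer LR from a per-image base rate:
#     lr = basic_lr_per_img * total_batch_size
# The batch size is not recorded in the log above, so 64 is an assumption.
basic_lr_per_img = 1.5625e-05          # from the log
batch_size = 64                        # assumed
lr = basic_lr_per_img * batch_size
print(lr)                              # 0.001

# For comparison, upstream YOLOX defaults to 0.01 / 64 per image,
# which would give a 10x larger rate at the same batch size:
default_lr = (0.01 / 64.0) * batch_size
print(default_lr)                      # 0.01
```

So at equal batch size this log trains with a 10x smaller learning rate than the upstream YOLOX default, which may or may not be intended for fine-tuning.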

@YuHengsss (Owner)

> I sampled all the images from the VID dataset and all the images from the DET dataset that have the same classes.

Please follow our instructions and try it again.

@mukundkhanna123 (Author)

My bad, I used 1/10 of the VID dataset and all images from the DET dataset. Sorry, that was a typo.

@YuHengsss YuHengsss reopened this Sep 29, 2022
@YuHengsss (Owner) commented Sep 29, 2022
> My bad, I used 1/10 of the VID dataset and all images from the DET dataset. Sorry, that was a typo.

Could you please use our annotations and try again, keeping all the other settings the same as this repo? We find that the training settings in this log have been changed (e.g. 30 epochs is too many for the small model; we use 7 by default). Hopefully we can find out the reason.

@Yipzcc commented Dec 8, 2022

@mukundkhanna123 Could you share your small dataset sampled from VID? VID is hard for us to download. Thank you!
