
Some questions about the training process #19

Closed
EnQing626 opened this issue Mar 13, 2020 · 1 comment


@EnQing626

Hi xiankai, thanks for sharing the code. I have some questions about the training process.

  1. I noticed that the shared script 'train_iteration_conf.py' first loads a pre-trained model named 'deeplab_davis_12_0.pth'. Was this model pre-trained on the saliency datasets (MSRA10K and DUT)? That would mean the saliency datasets are used both in pre-training and in fine-tuning (on DAVIS16).

  2. For the functions 'get_1x_lr_params' and 'get_10x_lr_params', why is the setup different when training on a single GPU versus multiple GPUs?

Thank you

@carrierlxk
Owner

Hi, thanks for your interest in our work.
Q1: We use the saliency datasets for both pre-training and fine-tuning.
Q2: The reason is that if you train on multiple GPUs, the keys in the saved model's state dict carry a 'module.' prefix, whereas a model trained on a single GPU is saved without the 'module.' prefix, so the parameter-collection functions have to account for that.
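
For context, a minimal PyTorch sketch of what this means: wrapping a model in `nn.DataParallel` stores the underlying network under a `module` attribute, so both the saved state-dict keys and helpers like `get_1x_lr_params`/`get_10x_lr_params` have to account for the extra `module.` level. The function names follow the issue's naming, but the bodies (and the `backbone` attribute) are assumed illustrations, not the repository's actual code.

```python
import torch
import torch.nn as nn

def get_1x_lr_params(model):
    # Sketch: yield backbone parameters to be trained at the base learning rate.
    # With nn.DataParallel the real network lives under model.module,
    # so unwrap it first; 'backbone' is a hypothetical submodule name.
    net = model.module if isinstance(model, nn.DataParallel) else model
    for p in net.backbone.parameters():
        if p.requires_grad:
            yield p

def load_checkpoint(model, path):
    # Strip a leading 'module.' prefix so a checkpoint saved from a
    # multi-GPU run can be loaded into a single-GPU model (and vice versa).
    state = torch.load(path, map_location='cpu')
    state = {k[len('module.'):] if k.startswith('module.') else k: v
             for k, v in state.items()}
    model.load_state_dict(state, strict=False)
```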
