Hi xiankai, thanks for sharing the code. I have some questions about the training process.
I noticed that the shared script 'train_iteration_conf.py' first loads a pre-trained model named 'deeplab_davis_12_0.pth'. Was this model pre-trained on the saliency datasets (MSRA10K and DUT)? If so, the saliency datasets are used both in pre-training and in fine-tuning (on DAVIS16).
For the functions 'get_1x_lr_params' and 'get_10x_lr_params', why do the settings differ between single-GPU and multi-GPU training?
Thank you
Hi, thanks for your interest in our work.
Q1: Yes, we use the saliency datasets for both pre-training and fine-tuning.
Q2: The reason is that if you train on multiple GPUs, the keys of the saved model's state dict carry the prefix 'module.'. If you train the model on a single GPU, the saved keys have no 'module.' prefix, so the parameter lookup has to be handled differently.
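To illustrate the prefix issue: wrapping a model in `nn.DataParallel` makes PyTorch prepend 'module.' to every key in the saved state dict. A common workaround is to strip that prefix before loading on a single GPU. The sketch below uses toy dictionary keys in place of a real state dict; the helper name and example keys are illustrative, not taken from the repository.

```python
# Hypothetical sketch: normalizing checkpoint keys saved from a
# multi-GPU (nn.DataParallel) run so they load on a single GPU.
# Toy values stand in for real tensors.

def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to keys."""
    return {
        (k[len("module."):] if k.startswith("module.") else k): v
        for k, v in state_dict.items()
    }

# Keys as they would appear in a multi-GPU checkpoint:
multi_gpu_keys = {"module.conv1.weight": 1, "module.fc.bias": 2}
single_gpu_keys = strip_module_prefix(multi_gpu_keys)
print(sorted(single_gpu_keys))  # ['conv1.weight', 'fc.bias']
```

After normalizing the keys this way, the same parameter-group helpers can iterate over the model regardless of how the checkpoint was produced.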