About training detail #1

Closed
Limingxing00 opened this issue Aug 9, 2021 · 2 comments

@Limingxing00

Hi, Zongxin,

Thank you for the nice work. I have a couple of questions about the training details.

  1. How many models do you use in Table 1? Do the DAVIS val/test splits share one model, and do YouTube-VOS 2018/2019 share another?
  2. You say "For main training, the training steps are 100,000 for YouTube-VOS or 50,000 for DAVIS." What is the total number of training iterations? I had assumed that "training steps" referred to an intermediate milestone for adjusting the learning rate.
@z-x-yang (Owner) commented Aug 9, 2021

  1. Yes. The DAVIS val/test splits share one model, and YouTube-VOS 2018/2019 share another.

  2. Pretraining on static images: 100,000 steps (about 1.1 days for AOT-L with 4 Tesla V100 GPUs).
    Main training on YouTube-VOS: 100,000 steps (about 1.1 days for AOT-L with 4 Tesla V100 GPUs).
    Main training on DAVIS, or fine-tuning YouTube-VOS models on DAVIS: 50,000 steps (about 0.6 days for AOT-L with 4 Tesla V100 GPUs).
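
In other words, these step counts are the total number of optimizer updates for each phase, not milestones inside a longer run. A minimal sketch of how such a step-based schedule is typically wired up is shown below; the polynomial decay, the power, and the base learning rate are illustrative assumptions for clarity, not the exact configuration used in this repository.

```python
# Sketch only: treats "training steps" as the total number of optimizer updates,
# with the learning rate evaluated as a function of the current step.
# poly_lr, power=0.9, and base_lr are hypothetical, not the repo's actual values.
def poly_lr(base_lr: float, step: int, total_steps: int, power: float = 0.9) -> float:
    """Polynomial learning-rate decay evaluated at every training step."""
    return base_lr * (1.0 - step / total_steps) ** power

total_steps = 100_000          # e.g. main training on YouTube-VOS
base_lr = 2e-4                 # hypothetical base learning rate
for step in range(total_steps):
    lr = poly_lr(base_lr, step, total_steps)
    # ...set the optimizer's param-group lr to `lr` and run one training step...
```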

@Limingxing00 (Author)

Thank you for your quick reply!
