
small question about lr_scheduler #38

Closed
huixiancheng opened this issue Mar 25, 2021 · 3 comments
Comments

@huixiancheng

Thanks for the open-source code!!
Could you tell me the meaning of metric in the lr scheduler step?

T2T-ViT/main.py

Lines 577 to 579 in f436fe4

```python
if lr_scheduler is not None:
    # step LR for next epoch
    lr_scheduler.step(epoch + 1, eval_metrics[eval_metric])
```

T2T-ViT/main.py

Lines 688 to 689 in f436fe4

```python
if lr_scheduler is not None:
    lr_scheduler.step_update(num_updates=num_updates, metric=losses_m.avg)
```

In my understanding, it looks like it doesn't have any special meaning in timm.

@yuanli2333
Collaborator

Hi, the lr_scheduler sets up the lr decay schedule used in training, such as multi-step decay or cosine decay. We use cosine decay in this work.
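For concreteness, here is a minimal sketch (not the repo's exact training code, and with illustrative hyperparameter values) of building timm's cosine scheduler with warmup and stepping it once per epoch:

```python
# Minimal sketch, assuming timm's CosineLRScheduler; values are illustrative.
import torch
from timm.scheduler import CosineLRScheduler

model = torch.nn.Linear(10, 10)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

lr_scheduler = CosineLRScheduler(
    optimizer,
    t_initial=300,        # anneal the lr over 300 epochs
    warmup_t=10,          # 10 epochs of linear warmup
    warmup_lr_init=1e-6,  # lr at the start of warmup
)

for epoch in range(300):
    # ... train one epoch ...
    lr_scheduler.step(epoch + 1)  # metric is an optional second argument
```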

@huixiancheng
Author

I know it's a cosine schedule with warmup.
But my question is about updating the lr after an epoch or a batch (iteration): there are two parameters, epoch/num_updates and metric.
Since I haven't used timm before, I can't understand the meaning of the metric parameter.
It doesn't seem to have any special meaning in timm either, as the code shows.
[screenshot of the timm scheduler source referenced above]
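For reference, the base Scheduler.step in timm looks roughly like the sketch below (paraphrased from timm/scheduler/scheduler.py at the time; exact code may differ by version). The metric is only stored; cosine decay ignores it, and only the plateau scheduler actually reads it:

```python
# Paraphrased sketch of timm's base Scheduler.step; may differ by version.
def step(self, epoch: int, metric: float = None) -> None:
    self.metric = metric                   # stored, but ignored by cosine decay
    values = self.get_epoch_values(epoch)  # schedule value(s) for this epoch
    if values is not None:
        values = self._add_noise(values, epoch)
        self.update_groups(values)         # write new lrs into param groups
```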

@yuanli2333
Collaborator

You need to update the lr_scheduler each epoch to tell it the current epoch, which is how lr schedulers are used in torch.
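For contrast, the metric argument actually matters for plateau-style schedulers (timm has a PlateauLRScheduler built on the same idea). A minimal sketch with torch's ReduceLROnPlateau, using a dummy validation loss:

```python
# Minimal sketch: metric only matters for plateau-style schedulers.
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=5)

for epoch in range(30):
    val_loss = 1.0  # dummy placeholder for the real validation loss
    scheduler.step(val_loss)  # lr drops only when val_loss stops improving
```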
