
Training always ends #12

Closed · yiren556 opened this issue Nov 4, 2022 · 3 comments

yiren556 commented Nov 4, 2022

[screenshot of the training log]
Excuse me, why does the process still end when training reaches epoch 9, even after the configuration file has been modified?

yiren556 (Author) commented Nov 4, 2022

yiren556 commented Nov 4, 2022

[screenshot of the modified configuration file]
This is my modified configuration file.

byeonghu-na (Owner) commented

Hello, sorry for the late reply.

I found that the baseline code has the same problem. (FangShancheng/ABINet#11, FangShancheng/ABINet#52, FangShancheng/ABINet#79)

From those replies and my own trial: you should set optimizer.scheduler.periods in the configuration file so that its values sum to the total number of epochs, e.g. [400, 100] for 500 epochs.
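
A minimal sketch of what that looks like in the configuration file (the key layout here is illustrative; match it to the keys your config actually uses):

```yaml
training:
  epochs: 500            # total training epochs
optimizer:
  scheduler:
    periods: [400, 100]  # must sum to epochs: 400 + 100 = 500
```

If the periods sum to less than the epoch count (e.g. [400, 100] with 600 epochs), the scheduler runs out of periods and training stops early, which matches the behavior reported above.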

byeonghu-na (Owner) commented

For convenience, I added an assertion that the scheduler periods sum to the number of epochs. (12fd00a)
Thank you for your contribution!
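
The added check can be sketched like this (a hypothetical reconstruction; the function and variable names are illustrative, not the repository's actual code in 12fd00a):

```python
def check_scheduler_periods(periods, epochs):
    """Fail fast at config-load time if the scheduler periods
    do not cover the full training run."""
    total = sum(periods)
    assert total == epochs, (
        f"sum of scheduler periods {periods} (= {total}) "
        f"must equal the number of epochs ({epochs})"
    )

# A config like the one above passes the check:
check_scheduler_periods([400, 100], 500)
```

Failing fast here turns a silent early termination at, say, epoch 9 into an immediate, explicit configuration error.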
