Decoder implementation #30

Closed
voldemortX opened this issue Aug 7, 2021 · 6 comments


voldemortX commented Aug 7, 2021

@Turoad Great code!

Does the best model uploaded for CULane use only a plain decoder instead of the decoder proposed in the paper?
If so, could I ask why? Does the proposed decoder bring no further improvement?

@nostayup

I have the same question. Why is that?

@voldemortX
Author

I have tried it, and it does bring an improvement; it just runs a little slower.

@nostayup

Have you noticed how the learning rate (lr) changes? I did not change the settings, so why does the lr go from small to large?

@voldemortX
Author

I had not noticed that. Going from large to small would seem more reasonable.

Member

Turoad commented Mar 4, 2022

Thanks for your interest.
Yes, we only use a plain decoder for faster inference speed. @voldemortX
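
For context, a "plain" decoder in this kind of lane-detection model typically means nothing more than dropout, a 1x1 convolution to per-class logits, and bilinear upsampling back to the input resolution, which is why it is cheaper at inference time. The sketch below is a hypothetical PyTorch illustration under that assumption; the channel count, class count, and feature stride are made up and are not the repository's actual module or configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlainDecoder(nn.Module):
    """Minimal 'plain' decoder sketch: dropout, 1x1 conv to logits, bilinear upsample.

    Hypothetical example; in_channels/num_classes are assumptions, not the repo's values.
    """

    def __init__(self, in_channels=128, num_classes=5):
        super().__init__()
        self.dropout = nn.Dropout2d(0.1)
        self.conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, x, out_size):
        # x: backbone features at reduced resolution (e.g. 1/8 of the input image).
        x = self.dropout(x)
        x = self.conv(x)
        # Upsample the logits straight back to the original image size.
        return F.interpolate(x, size=out_size, mode='bilinear', align_corners=False)

# Usage (assumed shapes): feat = torch.randn(1, 128, 36, 100)
# logits = PlainDecoder()(feat, out_size=(288, 800))
```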

Member

Turoad commented Mar 4, 2022

@nostayup If warm-up is used, the lr will first become larger and then smaller.
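
That is the usual warm-up behaviour: the lr ramps up linearly from a small value over the first iterations, then decays for the rest of training, so seeing it grow early on is expected. Below is a minimal sketch using PyTorch's `LambdaLR`; the warm-up length, total iterations, and polynomial-decay exponent are assumptions for illustration, not the repository's config values.

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 2)                              # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.025)   # assumed base lr

warmup_iters = 500      # assumed warm-up length (iterations)
total_iters = 20000     # assumed total training iterations
power = 0.9             # assumed polynomial-decay exponent

def lr_lambda(it):
    if it < warmup_iters:
        # Linear warm-up: lr grows from ~0 up to the base lr.
        return (it + 1) / warmup_iters
    # Polynomial decay afterwards: lr shrinks towards 0.
    return (1 - (it - warmup_iters) / (total_iters - warmup_iters)) ** power

scheduler = LambdaLR(optimizer, lr_lambda)

for it in range(total_iters):
    optimizer.step()    # stands in for the forward/backward pass
    scheduler.step()    # lr first increases (warm-up), then decreases (decay)
```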

Turoad closed this as completed Mar 4, 2022
3 participants