
Question about training #13

Open
xywftbt opened this issue May 18, 2021 · 1 comment

Comments


xywftbt commented May 18, 2021

Hello. The accuracy on the test set does not track the training and validation losses. The best test-set result appears at epoch 11; after that, as the loss keeps decreasing, accuracy does not improve and even drops slightly. The loss at epoch 11 is roughly 0.2. What could be the reason? I have tried many times and it is always like this. Looking forward to your reply!

@594422814 (Owner)

Which test set are you referring to? Is it GOT10K, or another dataset such as OTB?
During training, what I observed is that both the training loss and the validation loss decrease steadily until they plateau. After training finished, I did not evaluate models from different epochs; I directly took the final model (epoch 50) and tested it on each dataset. That said, I think model performance may indeed fluctuate across epochs. How much does epoch 50 drop compared with epoch 11?
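Since the best test result here appears at an intermediate epoch rather than the last one, one practical approach is to evaluate every saved checkpoint on a held-out benchmark and keep the best-scoring epoch instead of always taking epoch 50. The sketch below is hypothetical and not part of this repository; the epoch/score numbers are illustrative (loosely echoing the "peak at epoch 11" observation above), and `metrics_by_epoch` is an assumed mapping you would fill in from your own evaluation runs.

```python
# Hypothetical sketch: pick the best epoch checkpoint by test-set score,
# rather than defaulting to the final epoch.

def select_best_checkpoint(metrics_by_epoch):
    """metrics_by_epoch: dict mapping epoch -> test accuracy (or AUC/AO).

    Returns (best_epoch, best_score)."""
    best_epoch = max(metrics_by_epoch, key=metrics_by_epoch.get)
    return best_epoch, metrics_by_epoch[best_epoch]

# Illustrative scores: accuracy peaks at epoch 11 even though the
# training loss keeps decreasing afterwards.
scores = {5: 0.58, 11: 0.66, 30: 0.63, 50: 0.61}
print(select_best_checkpoint(scores))  # (11, 0.66)
```

A slight gap between minimum loss and best test score is normal: the loss is measured on training-distribution pairs, while benchmark accuracy is measured on different videos, so later epochs can overfit the training distribution.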
