Unstable experimental results on VOC split 1, 3-shot #107
Comments
Hey, sorry for the delayed response. One thing we advocate in the paper is to run multiple times and compare the average performance along with a 95% confidence interval, due to the high variance in few-shot settings. We introduced a new benchmark with repeated runs and the variance interval. The complete results for both base and novel classes on Pascal VOC and COCO can be found in Tables 7 and 8 in the appendix of the arXiv version: https://arxiv.org/pdf/2003.06957.pdf Hope it helps!
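For reference, a minimal sketch of how such an aggregate could be computed; the run values below are placeholders (not results from the paper), and the t-interval from SciPy is just one common choice for a small number of runs:

```python
# Minimal sketch: mean and 95% confidence interval over repeated runs.
# The nAP50 values below are hypothetical placeholders.
import numpy as np
from scipy import stats

runs = np.array([48.4, 45.6, 47.1, 46.3, 47.8])  # hypothetical nAP50 per run

mean = runs.mean()
# 95% CI from the t-distribution, appropriate when the number of runs is small
lo, hi = stats.t.interval(0.95, df=len(runs) - 1,
                          loc=mean, scale=stats.sem(runs))

print(f"mean nAP50: {mean:.1f}, 95% CI: [{lo:.1f}, {hi:.1f}]")
```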
Thank you for your reply. In Tables 7 and 8, you run multiple times with differently sampled training shots, so those fluctuations are normal. But I used the same training shots without changing the config, and I still saw a lot of volatility. I think this is abnormal.
Different random seeds might affect the results as well. That was the motivation for us to introduce a new evaluation benchmark for reliable evaluation.
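As a rough illustration, here is a minimal sketch of pinning the seeds in a PyTorch-based pipeline; the `set_seed` helper is hypothetical (not part of this repo), and note that some CUDA kernels remain non-deterministic regardless:

```python
# Minimal sketch (assuming a PyTorch-based pipeline): pin the sources of
# randomness before a run. Even with all of this, some CUDA ops stay
# non-deterministic, so results may still vary slightly between runs.
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:  # hypothetical helper for illustration
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN convolution selection
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```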
Hello, I use a single GPU for training on VOC split 1, 3-shot. I reduce the learning rate by a factor of 8 and increase the number of training iterations by a factor of 8. I train twice with the same configuration, but the two results are very different (48.4 nAP50 vs. 45.6 nAP50). I think this is abnormal; do you know the reason? The exact adjustment I made is sketched below.
Thanks!
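For clarity, my adjustment follows the usual linear scaling rule. Roughly this, assuming a Detectron2-style config (the baseline batch size, iteration count, and decay step here are placeholders, not the repo's actual defaults):

```python
# Sketch of the single-GPU adjustment described above, assuming a
# Detectron2-style config; baseline numbers are placeholders.
from detectron2.config import get_cfg

cfg = get_cfg()
scale = 8  # going from 8 GPUs down to 1

cfg.SOLVER.IMS_PER_BATCH = 16 // scale                 # smaller effective batch
cfg.SOLVER.BASE_LR = 0.02 / scale                      # linear LR scaling
cfg.SOLVER.MAX_ITER = 4000 * scale                     # proportionally more iterations
cfg.SOLVER.STEPS = tuple(s * scale for s in (3000,))   # shift LR-decay steps too
```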