The evaluation results are different #25
Hi @Bigtuo, thank you for trying our code. Many researchers have run our code and reproduced our results. If you are sure that your code does not have any bugs, there are some factors that may affect the precision (though not by a large margin):
I didn't make any changes, and I used APOLLO's evaluation.py. Besides the results being inconsistent with yours, the official baseline evaluation results are also inconsistent. How should I solve this?
@Bigtuo if you did not change the code and directly ran "evaluation.py", you are expected to get a very bad result.
thanks😁
---Original---
Subject: Re: [xincoder/GRIP] The evaluation results are different (#25)
@Bigtuo if you did not change the code and directly ran "evaluation.py", you are expected to get a very bad result. They have already clearly described that "./test_eval_data/prediction_gt.txt is just for testing the code, which is not the real ground truth", which means the evaluation was run against a fake ground truth. Please submit your results to their website to get the true error. Thanks.
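Because prediction_gt.txt is not the real ground truth, any ADE/FDE computed locally against it is meaningless. As a generic sketch (not the repo's actual evaluation.py), average and final displacement errors for a single trajectory are typically computed like this:

```python
import numpy as np

def ade_fde(pred, gt):
    """ADE: mean L2 distance between predicted and ground-truth positions
    over all timesteps; FDE: L2 distance at the final timestep.
    pred, gt: arrays of shape (T, 2) holding (x, y) per timestep."""
    dists = np.linalg.norm(pred - gt, axis=1)  # per-timestep L2 error
    return dists.mean(), dists[-1]

# Toy example: predicted path goes straight, ground truth diverges.
pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
gt   = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
ade, fde = ade_fde(pred, gt)
print(ade, fde)  # per-step errors are [0, 1, 2] -> ADE 1.0, FDE 2.0
```

If gt here is a placeholder file rather than the true trajectories, these numbers say nothing about model quality, which is why the locally computed scores look bad.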
epoch 20
WSADE: 15.483413498840125
ADEv, ADEp, ADEb: 15.903188312281609, 15.339882580782012, 15.480199725137442
WSFDE: 15.815679429015853
FDEv, FDEp, FDEb: 16.56610081965146, 15.41657854746371, 16.185653216166408
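For reference, WSADE and WSFDE in the ApolloScape trajectory benchmark are weighted sums of the per-class metrics, with weights 0.20 (vehicle), 0.58 (pedestrian), and 0.22 (bicycle). The numbers above are consistent with that definition:

```python
# Reproduce the weighted metrics from the per-class values posted above.
# ApolloScape trajectory weights (vehicle, pedestrian, bicycle).
WEIGHTS = (0.20, 0.58, 0.22)

def weighted_sum(vehicle, pedestrian, bicycle, weights=WEIGHTS):
    """Weighted sum of per-class displacement errors (WSADE / WSFDE)."""
    wv, wp, wb = weights
    return wv * vehicle + wp * pedestrian + wb * bicycle

wsade = weighted_sum(15.903188312281609, 15.339882580782012, 15.480199725137442)
wsfde = weighted_sum(16.56610081965146, 15.41657854746371, 16.185653216166408)
print(f"WSADE = {wsade:.6f}")  # matches the 15.483413... posted above
print(f"WSFDE = {wsfde:.6f}")  # matches the 15.815679... posted above
```

Recomputing the weighted sums like this is a quick sanity check that the per-class and aggregate numbers in a result dump agree with each other.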