
The test results of the pre-trained model are inconsistent with the benchmark #5

Closed
glorioushonor opened this issue Dec 25, 2023 · 1 comment

Comments

@glorioushonor

I have successfully tested the pre-trained model, but I found that the results do not match the numbers in the benchmark, especially the Normal metric. Is this normal? Looking forward to your reply.
[screenshot: test results]

By the way, the GTA results reported in your two papers are quite different.
[screenshots: GTA results tables from the two papers]

@River-Zhang
Owner

Thank you for using our code and providing feedback. It appears that the test data you're using includes CAPE, which is composed of CAPE-NFP and CAPE-FP. In our experiments, results on CAPE-FP tend to be better than those on CAPE-NFP, so your overall results likely fall between the two. As for the differing results in the two papers, please refer to the latest paper: in our initial project we used data from ICON, which contained some inaccuracies in the CAPE dataset. Those inaccuracies were subsequently corrected, which can lead to improved results with the revised data.
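
To make the "falls between the two" point concrete, here is a minimal sketch (not the repo's evaluation code): an aggregate CAPE score is a sample-weighted mean of the per-subset means, so it always lands between the CAPE-FP and CAPE-NFP numbers. The subset sizes and example values below are hypothetical placeholders, not reported results.

```python
def combined_metric(fp_mean: float, nfp_mean: float,
                    n_fp: int, n_nfp: int) -> float:
    """Sample-weighted mean of the CAPE-FP and CAPE-NFP subset metrics."""
    total = n_fp + n_nfp
    return (fp_mean * n_fp + nfp_mean * n_nfp) / total

# Hypothetical subset scores (e.g. Chamfer distance in cm) and sizes.
fp, nfp = 0.9, 1.3                                 # FP tends to score better
overall = combined_metric(fp, nfp, n_fp=50, n_nfp=100)
assert min(fp, nfp) <= overall <= max(fp, nfp)     # always falls in between
print(f"overall CAPE metric: {overall:.3f}")
```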

Regarding the Normal results, it appears that we inadvertently commented out the following code. It is a testing technique we found in ECON's code that can minimize discrepancies. Nevertheless, please note that all models were tested with the same code framework. You can try again, and if you run into any problems, please contact us!

[screenshot: the commented-out evaluation code]
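
For readers who cannot see the screenshot, here is a hedged sketch of the kind of check being described, assuming an ECON-style normal-map comparison in which the L2 error is accumulated only over pixels covered by both silhouettes. The function and its signature are illustrative, not the repo's actual code.

```python
import numpy as np

def normal_consistency_error(pred_nmap: np.ndarray, gt_nmap: np.ndarray,
                             pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """L2 normal error over the intersection of the two silhouettes.

    pred_nmap / gt_nmap: (H, W, 3) unit-normal images in [-1, 1].
    pred_mask / gt_mask: (H, W) boolean foreground masks.
    """
    both = pred_mask & gt_mask        # evaluate only pixels both meshes cover
    if not both.any():
        return 0.0
    diff = pred_nmap[both] - gt_nmap[both]
    return float(np.linalg.norm(diff, axis=-1).mean())
```

Restricting the error to the shared mask drops boundary pixels where the two renders disagree only because of silhouette mismatch, which is one way such a line could "minimize discrepancies".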
