Benchmark models show different l2p,l2q from the paper #53
Comments
@holyhao For the 40, 60, 80, 120 frame settings, you can reproduce it just with the … I confirmed the exact same numbers from the paper with the setting above, using the weights I uploaded. :)
I will leave a comment about the …
@jihoonerd Thanks for your reply. I set the test_window to 65 for the 30-frame test and it shows the same results as the paper. But there is still some confusion for me: how can test_window influence the results so much, when it only affects the test data partition?
@holyhao I also had the same concern you just mentioned. I agree with your opinion that it should only affect how the data is prepared, not the end performance. My naive guess is that …
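To picture why test_window changes which frames get evaluated, here is a small sketch of sliding-window extraction. The function name, the offset value, and the default arguments are illustrative assumptions, not the repository's actual code:

```python
def extract_windows(seq_len, test_window, offset=40):
    """Partition a motion sequence of `seq_len` frames into evaluation
    windows of `test_window` frames, sliding the start by `offset`.
    Frames that do not fit a full final window are dropped."""
    windows = []
    start = 0
    while start + test_window <= seq_len:
        windows.append((start, start + test_window))
        start += offset
    return windows

# A different test_window keeps the same window starts here but changes
# each window's extent and how many trailing frames are discarded, so a
# different set of frames ends up contributing to the reported metrics.
print(extract_windows(231, 50))
print(extract_windows(231, 65))
```

So even with identical model weights, two test_window values evaluate the model on different sub-sequences, which can shift the averaged numbers.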
I downloaded the benchmark models from the site and tested them on the LAFAN1 dataset, but the l2p and l2q are different from the paper. I wonder if something is wrong with my settings, or if the benchmark models were not trained with the best settings.
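For context, l2q and l2p in the LAFAN1 benchmark are average L2 distances between predicted and ground-truth global joint quaternions and (statistics-normalized) global joint positions. A minimal NumPy sketch, assuming arrays shaped (frames, joints, dim) and that any normalization has already been applied:

```python
import numpy as np

def l2q(gt_quats, pred_quats):
    """Mean per-frame L2 distance between global joint rotations
    stored as unit quaternions, shape (frames, joints, 4)."""
    return np.mean(np.sqrt(np.sum((gt_quats - pred_quats) ** 2, axis=(1, 2))))

def l2p(gt_pos, pred_pos):
    """Mean per-frame L2 distance between global joint positions,
    shape (frames, joints, 3), assumed normalized by training-set
    mean and std beforehand."""
    return np.mean(np.sqrt(np.sum((gt_pos - pred_pos) ** 2, axis=(1, 2))))
```

Note that both numbers depend on exactly which frames are included, which is why the test-data partition matters.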