InD dataset results #5
Comments
Hi Miltiadis, thanks for your interest in this work! I took a look at the settings used to generate the plot for the paper, and I realized the wrong eval args were being used in the script uploaded here. This has been changed (you'll want to look at …
Hi Colin, thank you for your immediate response and for solving this issue. I am wondering, though, why there is such a gap in performance. From what I understand, the reported numbers in both scenarios are errors 40 steps into the future, even if in one case we are predicting the whole future of the scene. Is my assumption correct, and if so, why is there such a performance gap? Thanks again for your time.
This is a good question, and I don't know the answer for sure. A few possibilities are:
Of course, there could be another reason I'm missing, but these are the reasons I would investigate first.
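One of the usual suspects for a gap like this is error compounding when the model is rolled out autoregressively instead of being conditioned on ground truth at each step. The following toy sketch (not from this repo; the 1-D constant-velocity setup and the biased velocity estimate are illustrative assumptions) shows how a small per-step error can grow roughly linearly over a 40-step horizon:

```python
import numpy as np

# Hypothetical 1-D constant-velocity trajectory: x_t = t for t = 0..40.
v_true = 1.0
T = 40
truth = np.cumsum(np.full(T + 1, v_true)) - v_true  # 0, 1, ..., 40

# A model whose velocity estimate is slightly biased.
v_model = 1.005

# Teacher-forced: each prediction conditions on the ground-truth state,
# so the error at step 40 is just the single-step error.
tf_preds = truth[:-1] + v_model
tf_err = abs(tf_preds[-1] - truth[-1])

# Autoregressive rollout: each prediction conditions on the previous
# prediction, so the bias accumulates over the full horizon.
x = truth[0]
for _ in range(T):
    x = x + v_model
ar_err = abs(x - truth[-1])

print(f"teacher-forced error at step 40: {tf_err:.3f}")  # 0.005
print(f"rollout error at step 40:        {ar_err:.3f}")  # 0.200
```

In this toy setting the rollout error at step 40 is 40x the single-step error, which is the same order of magnitude as the ~0.025 vs. ~0.23 discrepancy discussed here; real trajectory models can behave very differently, so this is only meant to illustrate the mechanism.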
OK, thanks a lot for your help. These were my first thoughts as well. I will investigate these directions further.
Hello Colin,
Thank you for releasing the source code of this excellent work. I am having trouble replicating your results for the InD traffic trajectory dataset. I am running the default script you provide to train and evaluate the model, and I am using the results from eval_results_test_last_driver5burnin.txt. The error that I get from these files is ~0.23 at 40 steps (averaged across 5 seeds). However, in your figures, the reported number is ~0.025. Do the reported numbers come from a different experimental setting, and if so, how can I access them? Thank you in advance.
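For concreteness, the "averaged across 5 seeds" computation described above can be sketched as follows. The file format of eval_results_test_last_driver5burnin.txt is not shown in this thread, so the per-seed error curves below are hypothetical placeholders; only the aggregation step is the point:

```python
import numpy as np

# Hypothetical per-step error curves, one per seed (40-step horizon).
# In practice these would be parsed from each seed's eval results file.
per_seed_errors = {
    seed: np.linspace(0.01, final, 40)
    for seed, final in zip(range(5), [0.25, 0.22, 0.23, 0.24, 0.21])
}

step = 40  # horizon of interest (1-indexed)
errs_at_step = [curve[step - 1] for curve in per_seed_errors.values()]
mean_err = float(np.mean(errs_at_step))
print(f"mean error at step {step} across {len(errs_at_step)} seeds: {mean_err:.3f}")
```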