Complete separation in time of scenes (like test set in trajnetplusplusdata) #8
Hello. You can, however, easily use the test set generator present in the current repository to generate the testing split for other external datasets. Hope this helps!
So when we create other test sets from external datasets, we have to make sure that these common frames are not an issue. Correct?
Correct. Note: the current Predictor code (e.g. LSTMPredictor) already handles this by taking only obs_length frames as input. Hope this addresses the concern.
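For illustration, the guard described above can be sketched as a simple truncation step. This is a hypothetical sketch, not the actual LSTMPredictor code: the function name, scene representation, and the obs_length value of 9 are assumptions.

```python
OBS_LENGTH = 9  # assumed observation length; check your dataset config

def truncate_to_observation(scene, obs_length=OBS_LENGTH):
    """Keep only the first obs_length positions of every track in a scene.

    `scene` is a list of tracks; each track is a list of (x, y) positions.
    Truncating before calling the model ensures that frames shared with
    other scenes' prediction windows can never leak in as input.
    """
    return [track[:obs_length] for track in scene]

# Example: a 21-frame primary track and a 15-frame neighbour track
scene = [[(i, i) for i in range(21)],
         [(i, 0) for i in range(15)]]
trimmed = truncate_to_observation(scene)
assert all(len(track) <= OBS_LENGTH for track in trimmed)
```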
Thank you for your response and help so far! However, for the LSTM I am still getting "-1" on the Col-I metric for certain categories of trajectories (Type I - static). The same happens with the Kalman Filter, even after applying the fix you mentioned in this recent issue. Meanwhile, the Col-II metric has a value > 0. What does it mean for some trajectory types to give -1 on Col-I while the global Col-I metric is not -1? I feel like this might be the same issue, but I'll have to investigate further.
Hi Pedro, a '-1' Col-I value indicates that you are not providing predictions for all the neighbours in the observed scene. Without this rule, a submission could provide no neighbour predictions at all and still end up with an ideal Col-I value of 0. The '-1' value was introduced to remove this corner case. Note: on the AICrowd challenge, the evaluator outputs '100' instead of '-1'.
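The corner case described above can be sketched as follows. This is a hypothetical illustration, not the official evaluator: the function name, arguments, and sentinel constant are assumptions made for clarity.

```python
COL_I_MISSING = -1  # sentinel; the AICrowd evaluator reportedly reports 100

def col_i(pred_neighbours, gt_neighbours, num_collisions):
    """Collision metric with a sentinel for missing neighbour predictions.

    pred_neighbours / gt_neighbours: number of neighbour tracks predicted
    vs. present in the ground-truth scene. num_collisions: collisions
    between the predicted primary track and the predicted neighbours.
    """
    if pred_neighbours < gt_neighbours:
        # Not all neighbours were predicted. Without this guard, an empty
        # set of neighbour predictions would trivially yield zero
        # collisions and thus an "ideal" Col-I of 0.
        return COL_I_MISSING
    return num_collisions

assert col_i(0, 3, 0) == COL_I_MISSING  # no neighbours predicted
assert col_i(3, 3, 1) == 1              # all neighbours predicted
```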
Thank you for the quick response!
If there are several dataset files, and one of them does not have trajectories of a certain category (but the others do), that shouldn't result in the Col-I metric being -1, right? I feel like that is what is happening at the moment. I will run some more experiments on a small subset of data. Thank you once again!
I was now experimenting with the Kalman filter, and just realized a possible problem with the available implementation in trajnetppbaselines: the neighbour predictions are being 'lost'. I have opened a PR for it on trajnetppbaselines.
Yes, this is indeed a bug. Thanks for fixing it. Merged the PR. |
Hi. Thank you for making the conversion code available.
Currently the test set available in the data repository does not have the same configuration as the training one. In particular, the start and end instants of different scenes never overlap, even though there are portions of tracks that are common to more than one scene.
I was wondering if there was an existing option to convert datasets into that format too, but I failed to find it. Is it publicly available in this repository?
When the start and end instants of different scenes are not completely separate (which is what I get when converting data with this code), I run into trouble during evaluation: the last 12 instants are retrieved for some tracks, because those instants are part of the observation portion of other scenes. I can circumvent this by forcing only the first obs_len positions into the models, so it is not really a big issue. But this behaviour could affect other people who are not aware of it.
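To make the situation concrete, here is a small hypothetical sketch for detecting scenes whose time spans overlap, which is the difference between the train-style conversion and the fully time-separated test set. The function name and the scene representation (scene id mapped to a start/end frame pair) are assumptions, not part of the repository's API.

```python
def overlapping_scenes(scenes):
    """Yield pairs of scene ids whose [start, end] frame ranges overlap.

    `scenes` maps scene_id -> (start_frame, end_frame). Scenes are
    compared in order of their start frame, so only adjacent pairs need
    to be checked.
    """
    items = sorted(scenes.items(), key=lambda kv: kv[1][0])
    for (id_a, (_, end_a)), (id_b, (start_b, _)) in zip(items, items[1:]):
        if start_b <= end_a:  # next scene starts before the previous ends
            yield id_a, id_b

# Example: scenes 0 and 1 share frames 12-20; scene 2 is separate
scenes = {0: (0, 20), 1: (12, 32), 2: (40, 60)}
assert list(overlapping_scenes(scenes)) == [(0, 1)]
```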
Thank you for reading. Have a good day!