How many samples are used for evaluation in the End-to-End leaderboard? #4

Hello,
I have a question about the number of frames used for evaluation in the End-to-End Forecasting Leaderboard.
Thanks.

Comments
Thanks for the kind feedback.
@neeharperi Here you use 6 timesteps, which means forecasting trajectories 6 s into the future, because the dataset is sampled at 1 Hz. However, the competition requires forecasting the next 3 s. Is anything misaligned here?
Great catch! I think our configuration changed during debugging; we will push an update to EvalAI. All you need to do is change this line from 10 to 5 and re-run create_data.py. The original sensor dataset is collected at 10 Hz. After the fix described above, we subsample it by a factor of 5 to match the nuScenes 2 Hz sampling rate, so 6 timesteps sampled at 2 Hz cover 3 seconds.
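For concreteness, here is a minimal sketch of the sampling arithmetic described above (the constant names are illustrative, not the actual variables in create_data.py):

```python
# Illustrative sketch of the sampling arithmetic (names are hypothetical,
# not taken from create_data.py).
SENSOR_RATE_HZ = 10      # raw sensor data is collected at 10 Hz
SUBSAMPLE_FACTOR = 5     # the value to change from 10 to 5
FORECAST_TIMESTEPS = 6   # number of future frames to predict

effective_rate_hz = SENSOR_RATE_HZ / SUBSAMPLE_FACTOR      # 10 / 5 = 2 Hz (nuScenes keyframe rate)
horizon_seconds = FORECAST_TIMESTEPS / effective_rate_hz   # 6 / 2 = 3 s, matching the competition

print(f"effective rate: {effective_rate_hz:.0f} Hz, forecast horizon: {horizon_seconds:.0f} s")
```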
Very clear. Now I understand how to subsample the sensor dataset for forecasting. Another question about the data: the first frame of a scene has no history at all. Does this affect forecasting?
Yes, you are correct that the first frame does not have any history. I don't believe it will have a significant impact on forecasting accuracy; it is up to each method to determine the right way to handle the case where there is no history.
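One simple way to handle the no-history case is to pad the history by repeating the current frame; a minimal sketch, assuming a hypothetical pad_history helper (not part of this repository):

```python
import numpy as np

def pad_history(history: np.ndarray, history_len: int) -> np.ndarray:
    """Hypothetical helper: pad a (T, 2) array of past xy positions to history_len frames.

    For the first frame of a scene, only the current position exists, so we
    repeat it backwards in time to produce a fixed-size input.
    """
    if history.shape[0] == 0:
        raise ValueError("need at least the current position to pad from")
    if history.shape[0] >= history_len:
        return history[-history_len:]
    pad = np.repeat(history[:1], history_len - history.shape[0], axis=0)
    return np.concatenate([pad, history], axis=0)

# First frame of a scene: only the current xy position is available.
current_only = np.array([[12.3, -4.1]])
padded = pad_history(current_only, history_len=4)
print(padded.shape)  # (4, 2): four identical rows standing in for missing history
```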