During testing, how do you determine the best model to use?
For example, to reproduce the results on the KITTI leaderboard, how do you pick the model? Do you just use the last checkpoint after the 120,000 iterations are finished?
As with many other methods on the leaderboards, the network is trained on a private custom split of only the provided KITTI training data for better generalization. Our training procedure and hyperparameter selection are outlined in the paper, and the plot_ap.py script is provided to make it easier to select a good checkpoint, so feel free to experiment and see what works best for you.
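For anyone looking for a concrete way to do the checkpoint selection described above, here is a minimal sketch of the idea (this is not the repo's actual plot_ap.py; the directory layout and the `val_ap.txt` file name are assumptions): record a validation AP for each saved checkpoint, then pick the checkpoint with the highest recorded AP for test submission.

```python
# Minimal sketch: pick the checkpoint with the highest recorded validation AP.
# Assumes each checkpoint directory contains a file "val_ap.txt" holding a
# single AP value -- this layout is hypothetical, not the repo's actual format.
import glob
import os


def best_checkpoint(checkpoint_dir: str) -> str:
    """Return the checkpoint directory whose recorded validation AP is highest."""
    best_path, best_ap = None, float("-inf")
    for ap_file in glob.glob(os.path.join(checkpoint_dir, "*", "val_ap.txt")):
        with open(ap_file) as f:
            ap = float(f.read().strip())
        if ap > best_ap:
            best_ap, best_path = ap, os.path.dirname(ap_file)
    if best_path is None:
        raise FileNotFoundError(f"No val_ap.txt files found under {checkpoint_dir}")
    return best_path


if __name__ == "__main__":
    # Hypothetical layout: checkpoints/iter_010000/val_ap.txt, iter_020000/..., etc.
    print(best_checkpoint("checkpoints"))
```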
Thanks, but to reproduce the results reported in the paper: during testing (not validation) there is no way to use plot_ap.py, since the test labels are not available.
Do you just use the last checkpoint after the 120,000 iterations are finished?
Or do you pick the checkpoint from the iteration that performed best on the validation set?