
How to determine which model to use during testing #11

Closed
yzhou-saic opened this issue Mar 16, 2018 · 2 comments
@yzhou-saic

During testing, how can we determine the best model to use?
For example, to reproduce the results on the KITTI leaderboard, how do you determine which model to use? Do you just use the last model once the 120,000 iterations are finished?

@kujason (Owner) commented Mar 16, 2018

As with many other methods on the leaderboards, the network is trained on a private custom split of only the provided KITTI training data for better generalization. Our training procedure and hyperparameter selection are outlined in the paper, and the plot_ap.py script is provided to make it easier to select a good checkpoint, so feel free to experiment and see what works best for you.
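For anyone landing on this thread later, here is a minimal sketch of the checkpoint-selection step described above: evaluate each saved checkpoint on the validation set, record its AP, and keep the iteration with the highest score. The file name and the "iteration AP" line format below are made-up assumptions for illustration; the actual output that plot_ap.py reads may look different.

```python
# Hypothetical helper: pick the checkpoint with the highest validation AP.
# Assumes a plain-text log where each line is "<iteration> <ap>", e.g.
#   40000 0.7312
#   80000 0.7654
# (this format is an assumption, not the actual plot_ap.py format).

def best_checkpoint(ap_log_path):
    """Return (iteration, ap) for the checkpoint with the highest AP."""
    best_iter, best_ap = None, float('-inf')
    with open(ap_log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue  # skip blank or malformed lines
            iteration, ap = int(parts[0]), float(parts[1])
            if ap > best_ap:
                best_iter, best_ap = iteration, ap
    return best_iter, best_ap


if __name__ == '__main__':
    # 'val_ap_car_moderate.txt' is a placeholder file name.
    iteration, ap = best_checkpoint('val_ap_car_moderate.txt')
    print('Best checkpoint: iteration %s (AP %s)' % (iteration, ap))
```

The selected iteration is then the one whose weights you would load for test-set inference, since the test labels are withheld and no AP can be computed there.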

@kujason closed this as completed Mar 16, 2018
@yzhou-saic (Author)

But in order to reproduce the results reported in the paper: during testing (not validation), there is no way to use plot_ap.py.
Do you just use the last model once the 120,000 iterations are finished, or do you pick the iteration that performs best on the validation set?

@melfm added the -____- label Mar 16, 2018