Training code #43

Closed
tarashakhurana opened this issue Aug 26, 2020 · 5 comments

Comments

@tarashakhurana

Do you plan to release your training code sometime in the future? It would be really helpful to advance the research on monocular depth estimation!

If not, can you explain how Pareto optimality is ensured during training? It seems like there would also have to be an undo step in the training pipeline, so that whenever the Pareto optimum is reached and the next backpropagation update disturbs this state, that update is reversed.

@ranftlr
Collaborator

ranftlr commented Aug 28, 2020

There are no plans to release the training code.

I'm not sure I understand your question correctly. The algorithm we use is based on multiple gradient descent. As such, it converges to a Pareto-stationary point, which means there is no update once you are at a Pareto-optimal point. In practice this is only approximately true, as we are working with stochastic approximations of the objectives. Did you have a look at the corresponding paper that covers the algorithm (https://arxiv.org/abs/1810.04650)?
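(Not the MiDaS training code, but for anyone following along: a minimal sketch of the two-task case of the multiple gradient descent algorithm from the paper linked above, using the closed-form min-norm weighting. The model, losses, and optimizer are placeholders. Note that no explicit "undo" step is needed; at a Pareto-stationary point the combined gradient is approximately zero.)

```python
import torch

def mgda_two_task_step(model, loss_a, loss_b, optimizer):
    """One shared-parameter update with the two-task min-norm weighting
    (Sener & Koltun, arXiv:1810.04650). At a Pareto-stationary point the
    combined gradient vanishes, so the update naturally stops there."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Per-task gradients of the two objectives w.r.t. the shared weights.
    g_a = torch.autograd.grad(loss_a, params, retain_graph=True)
    g_b = torch.autograd.grad(loss_b, params, retain_graph=True)

    ga = torch.cat([g.reshape(-1) for g in g_a])
    gb = torch.cat([g.reshape(-1) for g in g_b])

    # alpha minimises || alpha * ga + (1 - alpha) * gb ||^2 over [0, 1].
    diff = ga - gb
    alpha = torch.dot(gb - ga, gb) / (diff.dot(diff) + 1e-12)
    alpha = alpha.clamp(0.0, 1.0)

    # Write the combined gradient back and take the optimizer step.
    optimizer.zero_grad()
    for p, da, db in zip(params, g_a, g_b):
        p.grad = alpha * da + (1.0 - alpha) * db
    optimizer.step()
    return alpha.item()
```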

@Sankar-CV

Dear authors,

Thank you so much for such great work and for sharing the trained model. The proposed MiDaS model performs impressively on diverse scenes and is a great contribution to the research community. I would like to understand two points:

(1) The depth map inferred from the pre-trained MiDaS model is in the form of inverse depth, and even after inverting it I am not able to get absolute depth. What is the method to obtain absolute depth from the inverse depth?

(2) The per-frame quality of the depth predicted by the pre-trained MiDaS model is really good, but it is inconsistent/jittery over a video sequence. We would be grateful if the training code were also made accessible so that the MiDaS model can be improved further.

@vasavamsi

Dear Authors,

Can you please provide the training script for the model? It would be really helpful for evaluating the model's performance on other datasets.

Regards,

@ranftlr
Collaborator

ranftlr commented Sep 13, 2020

@Sankar-CV See for example #42 for answers surrounding relative depth.
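(Not the authors' code, but for reference: MiDaS predicts relative inverse depth, so recovering metric depth requires an unknown scale and shift, which can be fitted if you have a few ground-truth measurements, e.g. from LiDAR or stereo. A minimal sketch of that least-squares alignment follows; names are placeholders.)

```python
import numpy as np

def align_inverse_depth(pred_inv_depth, gt_depth, mask=None):
    """Fit scale s and shift t so that s * pred_inv_depth + t matches the
    ground-truth inverse depth in a least-squares sense, then invert to
    obtain metric depth. gt_depth needs at least a few valid values."""
    if mask is None:
        mask = gt_depth > 0
    x = pred_inv_depth[mask].astype(np.float64)
    y = 1.0 / gt_depth[mask].astype(np.float64)   # target inverse depth

    # Closed-form 1D least squares: [s, t] = argmin || s*x + t - y ||^2
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)

    aligned_inv = s * pred_inv_depth + t
    aligned_inv = np.clip(aligned_inv, 1e-6, None)  # avoid division by zero
    return 1.0 / aligned_inv                        # metric depth
```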

There are still no plans to release the training code.

@ranftlr
Collaborator

ranftlr commented Oct 20, 2020

Closing due to inactivity.
