The prediction is not well #39
I experience a similar issue. Even when the losses seem relatively low (roughly the same as in the article), the images are still somewhat blurry, like the one above.
@harsanyika Have you solved the issue?
Hi, I don't really understand your problem. The prediction looks OK to me. Not sure what you mean.
@chrirupp The loss value is low, but compared to your model's prediction, the result is not very good.
Looking at one image is not conclusive. Predictions can look different while overall performance is similar. A better way to compare is to compute the RMS or relative error over the whole test set; this gives a much more objective view.
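The whole-test-set comparison suggested above can be sketched as follows (a minimal NumPy version; the exact handling of invalid pixels is defined in the authors' MATLAB code, so treat this as an approximation):

```python
import numpy as np

def rms_error(pred, gt):
    """Root-mean-square error between predicted and ground-truth depth maps."""
    return np.sqrt(np.mean((pred - gt) ** 2))

def relative_error(pred, gt):
    """Mean absolute relative error: |pred - gt| / gt, averaged over pixels."""
    return np.mean(np.abs(pred - gt) / gt)

# Accumulate both metrics over every test image, then average,
# rather than judging quality from a single prediction.
```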
Thanks, I will try. |
@chrirupp Sorry to bother you again. I wonder how to handle the ground-truth labels during training. Do we need to resize them to (128, 160) and compare with the predictions, or leave the labels untouched and up-sample the predictions to (480, 640)?
For training we down-sample the ground truth using nearest neighbor interpolation. For testing we up-sample the prediction to 640x480. |
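This scheme can be sketched with NumPy index sampling (nearest neighbor picks existing pixels rather than blending them, so no spurious in-between depths appear at object boundaries; the array names here are illustrative):

```python
import numpy as np

def resize_nearest(depth, out_h, out_w):
    """Nearest-neighbor resize of a 2-D depth map by index sampling."""
    in_h, in_w = depth.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return depth[np.ix_(rows, cols)]

# Toy example with a synthetic 480x640 ground-truth depth map.
gt = np.random.rand(480, 640)
gt_small = resize_nearest(gt, 128, 160)       # down-sample labels for training
pred_big = resize_nearest(gt_small, 480, 640) # up-sample a prediction for testing
```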
Thank you so much~ |
@chrirupp When we compute test metrics such as rel, do we need to store the ground truth as images, or can we just pack the ground truth into a .mat file instead of the image format? Thanks.
I am not sure what you are saying. You can find how to compute the relative error in our matlab code. How you store or load the data is up to you. |
@harsanyika Hi~ I am facing the same problem you described: the depth prediction is blurry. Have you solved it?
Not really. My reverse Huber loss is lower than the reverse Huber loss of the downloadable results (but I used more images, more epochs, and online data augmentation). However, my relative error and RMS error are higher, and I still experience the blurriness. I noticed that I reach my lowest loss sooner than suggested in the article (after 10-15 epochs). At that point the LR is still high, and as I lower the LR the network starts to overfit a little (I think). Right now I am unsure how to solve this problem.
I restored the model that you provide and fine-tuned all layers except the ResNet-50 backbone.
I use the berHu loss and minimize it with AdamOptimizer; the learning rate is 0.0001 and drops to 0.000001 after 10,000 steps, out of 20,000 steps in total.
Here is my loss curve:
Here is the prediction from my model:
As you can see, the loss value is small, but the prediction is not very good; it is blurry. Can you give me some advice? Thanks!
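For reference, the berHu (reverse Huber) loss discussed in this thread can be sketched in NumPy as follows. Setting the threshold to 20% of the maximum absolute residual in the batch follows the paper's convention; the fraction is exposed as a parameter here since implementations vary:

```python
import numpy as np

def berhu_loss(pred, gt, frac=0.2):
    """Reverse Huber (berHu) loss: L1 for small residuals, quadratic for
    large ones, with the threshold c set per batch as a fraction of the
    maximum absolute residual."""
    diff = np.abs(pred - gt)
    c = frac * diff.max()
    if c == 0:
        return 0.0  # perfect prediction: every residual is zero
    quadratic = (diff ** 2 + c ** 2) / (2 * c)
    return np.mean(np.where(diff <= c, diff, quadratic))
```

A batch-dependent threshold means the loss adapts its L1/L2 crossover to the current error scale, which is one reason a low raw loss value is not directly comparable across training runs.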
@iro-cp