
The prediction is not well #39

Closed
Ariel-JUAN opened this issue Dec 3, 2017 · 13 comments

Comments

@Ariel-JUAN

I restored the model that you provide and fine-tuned all layers except the ResNet-50 backbone.
I use the BerHu loss and minimize it with AdamOptimizer; the learning rate is 0.0001 for the first 10,000 steps and 0.000001 after that, for 20,000 steps in total.
Here is my loss curve:
[image: finetune]
Here is a prediction from my model:
[image: 1]
As you can see, the loss value is small, but the prediction is not very good; it is blurry. Can you give me some advice? Thanks!
@iro-cp
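
For readers hitting the same wall, here is a minimal TF 1.x-style sketch of the BerHu (reverse Huber) loss and the two-stage learning-rate schedule described above. The threshold c = 0.2 · max|residual| follows Laina et al. 2016; the placeholder shapes, variable names, and wiring are illustrative assumptions, not code from this repo.

```python
import tensorflow as tf  # TF 1.x API, matching the AdamOptimizer usage above

def berhu_loss(pred, gt):
    # BerHu (reverse Huber): L1 for small residuals, scaled L2 beyond a
    # batch-dependent threshold c = 0.2 * max|residual|.
    abs_err = tf.abs(pred - gt)
    c = 0.2 * tf.reduce_max(abs_err)
    l2_branch = (tf.square(abs_err) + tf.square(c)) / (2.0 * c + 1e-8)
    return tf.reduce_mean(tf.where(abs_err <= c, abs_err, l2_branch))

# Hypothetical wiring: pred is the network output, gt the down-sampled
# ground truth (see the resizing discussion later in this thread).
pred = tf.placeholder(tf.float32, [None, 128, 160, 1])
gt = tf.placeholder(tf.float32, [None, 128, 160, 1])
loss = berhu_loss(pred, gt)

# Two-stage schedule from the comment above: 1e-4 for the first
# 10,000 steps, then 1e-6 until step 20,000.
step = tf.Variable(0, trainable=False)
lr = tf.train.piecewise_constant(step, [10000], [1e-4, 1e-6])
train_op = tf.train.AdamOptimizer(lr).minimize(loss, global_step=step)
```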

@harsanyika

I'm experiencing a similar issue. Even when the losses seem relatively low (roughly the same as in the article), the predictions are still somewhat blurry, like the one above.

@Ariel-JUAN
Author

@harsanyika Have you solved the issue?

@chrirupp
Collaborator

chrirupp commented Dec 4, 2017

Hi, I don't really understand your problem. The prediction looks ok to me. Not sure what you mean.

@Ariel-JUAN
Author

@chrirupp The loss value is low, but compared to your model's prediction the result is not very good.
Here is the prediction using the model you trained:
[image: yuan]
Here is the prediction using the model I trained:
[image: shang]
Do you see the difference? I think mine is just not as good as yours. Can you give me some advice?

@chrirupp
Collaborator

chrirupp commented Dec 5, 2017

Looking at one image is not conclusive. Predictions can look different while overall performance is similar. A better way to compare is to compute the RMS or relative error on the whole test set; this gives a much more objective view.
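
For instance, a rough numpy sketch of those two metrics (not the repo's MATLAB evaluation code; masking out pixels without valid depth is an assumption):

```python
import numpy as np

def rms_and_rel(preds, gts):
    # preds, gts: (N, H, W) arrays, with predictions already up-sampled
    # to the ground-truth resolution. Pixels without ground-truth depth
    # (gt == 0) are masked out.
    mask = gts > 0
    diff = preds[mask] - gts[mask]
    rms = np.sqrt(np.mean(diff ** 2))
    rel = np.mean(np.abs(diff) / gts[mask])
    return rms, rel
```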

@Ariel-JUAN
Author

Thanks, I will try.

@Ariel-JUAN
Author

@chrirupp Sorry to bother you again. When training the model, how should we handle the ground-truth labels? Do we need to resize them to (128, 160) and compare them with the predictions? Or do we leave the labels untouched and up-sample the predictions to (480, 640)?
Thanks~

@chrirupp
Collaborator

chrirupp commented Dec 6, 2017

For training we down-sample the ground truth using nearest neighbor interpolation. For testing we up-sample the prediction to 640x480.
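
Something like the following Pillow-based sketch would implement that; nearest neighbor for the ground truth is what the comment specifies, while bilinear interpolation for the prediction up-sampling is an assumption:

```python
import numpy as np
from PIL import Image

def downsample_gt(depth, size=(160, 128)):
    # PIL size tuples are (width, height). Nearest neighbor avoids
    # interpolating across missing-depth holes in the ground truth.
    return np.asarray(Image.fromarray(depth).resize(size, Image.NEAREST))

def upsample_pred(pred, size=(640, 480)):
    # Up-sample the network output back to 640x480 for evaluation;
    # the bilinear choice here is an assumption.
    return np.asarray(Image.fromarray(pred).resize(size, Image.BILINEAR))
```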

@Ariel-JUAN
Author

Thank you so much~

@Ariel-JUAN
Author

@chrirupp When computing test metrics such as rel, does the ground truth need to be stored as images, or can we just pack the ground truth into a .mat file rather than an image format? Thanks.

@chrirupp
Collaborator

I am not sure what you are asking. You can find how to compute the relative error in our MATLAB code. How you store or load the data is up to you.
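
For example, if the ground truth were saved in a .mat file, loading it in Python is a one-liner (the file and key names here are hypothetical; check your own file):

```python
from scipy.io import loadmat

data = loadmat('gt_depths.mat')  # hypothetical file name
gts = data['depths']             # hypothetical key, e.g. an (N, H, W) array
```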

@luohongcheng

@harsanyika Hi~ I am facing the same problem you described: the depth predictions are blurry. Have you solved it?

@harsanyika

Not really. My reverse Huber loss is lower than that of the downloadable results (though I used more images, more epochs, and online data augmentation). However, my relative error and RMS error are higher, and I still see the blurriness.

I also noticed that I reach my lowest loss sooner than the article suggests (after 10-15 epochs). At that point the learning rate is still high, and as I lower it the network starts to overfit a little (I think). Right now I am unsure how to solve this problem.
