
combined model loss #1

Open
udion opened this issue Aug 30, 2018 · 6 comments
Comments

@udion

udion commented Aug 30, 2018

Hey,
great work @hutec.

I have a doubt, though. In the paper "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?", the loss function doesn't involve the `eps` noise or `T = 20` (the number of Monte Carlo integration steps) that your code uses to add noise to the uncertainty. Your loss function is not clear to me. Could you please elaborate a little, or point to some resources that build up this concept thoroughly and mathematically?

Thanks

@hutec
Owner

hutec commented Sep 1, 2018

Hey.

Sorry, this was early code from the development process. I think you are right that this is not the loss described in the "What Uncertainties Do We Need..." paper.

However, training/combined_training.py contains a more understandable implementation of the loss from equation 8 in the paper (the mean is replaced with a sum there).
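For reference, a minimal NumPy sketch of that equation-8 loss, with the network predicting `s = log(sigma^2)` for numerical stability as the paper suggests (function and variable names here are my own, not the repo's):

```python
import numpy as np

def heteroscedastic_loss(y_true, y_pred, log_var):
    """Eq. 8 of Kendall & Gal: squared error weighted by the predicted
    precision exp(-s), plus the log-variance penalty s/2."""
    precision = np.exp(-log_var)          # 1 / sigma^2
    sq_err = (y_true - y_pred) ** 2
    return np.mean(0.5 * precision * sq_err + 0.5 * log_var)
```

With `log_var = 0` (i.e. sigma = 1) this reduces to half the plain MSE; raising `log_var` only pays off for samples whose residual is large.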

Hope that helps, otherwise feel free to ask.

@udion
Author

udion commented Sep 2, 2018

Hi,
So I was trying out the loss function as given in the paper (in training/combined_training.py, as you mentioned) for one of my applications. I have noticed that my log_sigma values reduce to 0 (i.e. sigma = 1), so my loss function reduces to the original loss function (the one without uncertainty). I was wondering: what constraint stops the network from learning log_sigma = 0?
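One way to see what pins log_sigma down: minimizing a single term of the equation-8 loss, `0.5*exp(-s)*r**2 + 0.5*s`, over `s = log(sigma^2)` gives the analytic optimum `s* = log(r**2)`, i.e. sigma equals the residual magnitude. So log_sigma = 0 is only optimal where residuals are around 1; a collapse to 0 everywhere may just mean the residuals are near 1, or that the sigma head isn't getting useful gradient. A quick, purely illustrative numeric check of that minimizer:

```python
import numpy as np

def per_sample_loss(s, residual):
    # one term of the eq. 8 loss, with s = log(sigma^2)
    return 0.5 * np.exp(-s) * residual**2 + 0.5 * s

# brute-force the minimizer over a fine grid of s values
s_grid = np.linspace(-6.0, 6.0, 120001)
for r in (0.5, 1.0, 2.0):
    s_star = s_grid[np.argmin(per_sample_loss(s_grid, r))]
    # analytic optimum: s* = log(r^2), so sigma* = |r|
    print(r, s_star, np.log(r**2))
```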

Any help appreciated.

Thanks

@udion
Author

udion commented Sep 5, 2018

@hutec

In my application I tried training the network with the above-mentioned loss function, but my log_sigma values keep dropping to zero. Any clues?

@hutec
Owner

hutec commented Sep 5, 2018

Sorry, I'm currently busy. Off the top of my head I have no clues. If I find time, I may look into it.

@udion
Author

udion commented Sep 12, 2018

@hutec

I think the loss function where you use Monte Carlo integration is for the classification task (not regression); this is mentioned in the paper. The math is still not entirely clear to me, though.
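For what it's worth, the classification variant (equation 12 in the paper) corrupts the logits with Gaussian noise scaled by the predicted sigma, averages the softmax probabilities over T Monte Carlo samples, and then takes the negative log-likelihood. A rough NumPy sketch under that reading (my own naming, not the repo's code):

```python
import numpy as np

def mc_aleatoric_ce(logits, log_var, labels, T=20, rng=None):
    """Eq. 12 of Kendall & Gal: average softmax probabilities over T
    noise-corrupted logit samples, then take the negative log-likelihood."""
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = np.exp(0.5 * log_var)                 # (N, C) predicted std dev
    mean_probs = np.zeros_like(logits)
    for _ in range(T):
        noisy = logits + sigma * rng.standard_normal(logits.shape)
        shifted = noisy - noisy.max(axis=1, keepdims=True)  # stable softmax
        exp = np.exp(shifted)
        mean_probs += exp / exp.sum(axis=1, keepdims=True)
    mean_probs /= T
    return -np.mean(np.log(mean_probs[np.arange(len(labels)), labels]))
```

As sigma goes to zero this falls back to ordinary cross-entropy; large sigma on a confidently-classified sample washes the prediction out toward uniform and raises the loss.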

@udion
Author

udion commented Sep 15, 2018

@hutec

What are your thoughts on this? Do you think the results can be improved? Any suggestions?

https://udion.github.io/post/uncertain_deepl/

Thanks
