
unclear behavior of horseshoe prior #85

Open
lcnature opened this issue Aug 24, 2019 · 0 comments
lcnature commented Aug 24, 2019

Hi,
Thanks for the great package!
I was trying to understand the default prior placed on the noise standard deviation (I suppose that is what the last few hyperparameters in the code represent?). When I played with the scale parameter of the implemented horseshoe prior, the distribution and the generated samples do not seem to match. Here is code to generate two plots: while the peak of the histogram of the generated samples lies between -3 and -2, the log probability of the distribution still peaks at 0.

import matplotlib.pyplot as plt
import numpy as np
from moe.optimal_learning.python.base_prior import HorseshoePrior

horseshoe = HorseshoePrior(0.1)        # horseshoe prior with scale 0.1
x = horseshoe.sample_from_prior(1000)  # draw 1000 samples from the prior

# Histogram of the samples: peaks between -3 and -2
plt.hist(x, 20)
plt.show()

# Log probability evaluated at the same samples: peaks at 0
plt.plot(x, horseshoe.lnprob(x), '.')
plt.show()
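For what it's worth, here is a generic sanity check, independent of MOE, for whether a prior's `sample_from_prior` and `lnprob` describe the same distribution: the normalized histogram of samples should approximately match `exp(lnprob)` on a grid. Everything below is my own illustration (names and the half-Cauchy test distribution are my choices, not MOE code); the half-Cauchy is just a convenient heavy-tailed example related to the horseshoe's local scale, so the snippet runs without MOE installed.

```python
import numpy as np

rng = np.random.default_rng(0)
scale = 0.1

# Half-Cauchy sampling via the inverse CDF: x = scale * tan(pi * u / 2)
u = rng.uniform(size=100_000)
samples = scale * np.tan(np.pi * u / 2.0)

def half_cauchy_lnprob(x, scale):
    # log density of a half-Cauchy(scale), valid for x >= 0
    return np.log(2.0 / (np.pi * scale)) - np.log1p((x / scale) ** 2)

# Empirical density from a histogram, restricted to a finite window x < 1
counts, edges = np.histogram(samples[samples < 1.0], bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# density=True normalizes over the truncated sample, so rescale by P(X < 1)
p_window = np.mean(samples < 1.0)
empirical = counts * p_window
analytic = np.exp(half_cauchy_lnprob(centers, scale))

# If sampling and lnprob agree, the two curves match to a few percent of the peak
max_err = np.max(np.abs(empirical - analytic)) / analytic.max()
print(max_err < 0.05)
```

Running the same kind of comparison against `HorseshoePrior` should make the mismatch you describe (histogram peak vs. `lnprob` peak) easy to quantify.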

I am worried this may affect the behavior of the algorithm.
Any thoughts, or any suggestion for an alternative prior on the noise variance?
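In case it helps the discussion: one common alternative (my own suggestion, not something MOE documents) is a log-normal prior on the noise variance, i.e. a normal prior on its log. If the prior is defined directly in log-space, sampling and `lnprob` are trivially consistent by construction. The class below is a hypothetical drop-in sketch; the name, the `mean=-2.0` choice, and the convention that priors act on log-transformed hyperparameters are all assumptions on my part.

```python
import numpy as np

class LognormalPrior:
    """Normal prior on theta = log(sigma^2); hypothetical illustration only."""

    def __init__(self, mean=0.0, sigma=1.0, rng=None):
        self.mean = mean
        self.sigma = sigma
        self.rng = rng or np.random.default_rng()

    def sample_from_prior(self, n):
        # samples of theta = log(sigma^2), drawn directly in log-space
        return self.rng.normal(self.mean, self.sigma, size=n)

    def lnprob(self, theta):
        # log density of the normal in log-space, evaluated at theta
        return (-0.5 * ((theta - self.mean) / self.sigma) ** 2
                - np.log(self.sigma) - 0.5 * np.log(2.0 * np.pi))

prior = LognormalPrior(mean=-2.0, sigma=1.0, rng=np.random.default_rng(0))
draws = prior.sample_from_prior(10_000)

# Histogram mode and lnprob peak now coincide at theta = mean = -2
print(round(float(np.mean(draws)), 1))
```

With this construction the sample histogram and `exp(lnprob)` peak in the same place, which avoids the discrepancy above.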
