
NUTS sampler has no attribute 'accepted_results' and fails step size adaptation #549

Closed
janosh opened this issue Sep 7, 2019 · 5 comments
@janosh (Contributor) commented Sep 7, 2019

The following code fails with the error AttributeError: 'NUTSKernelResults' object has no attribute 'accepted_results'

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

normals_2d = [
    tfd.MultivariateNormalDiag([0, 0], [1, 1]),
    tfd.MultivariateNormalDiag([4, 4], [1, 1]),
]
bimodal_gauss = tfd.Mixture(tfd.Categorical([1, 1]), normals_2d)

@tf.function
def sample_chain(*args, **kwargs):
    return tfp.mcmc.sample_chain(*args, **kwargs)

step_size = 1e-3
# The HamiltonianMonteCarlo kernel works with both simple and dual averaging adaptation.
# kernel = tfp.mcmc.HamiltonianMonteCarlo(
#     bimodal_gauss.log_prob, step_size=step_size, num_leapfrog_steps=3
# )
# On the other hand, the NoUTurnSampler fails with both.
kernel = tfp.mcmc.NoUTurnSampler(bimodal_gauss.log_prob, step_size=step_size)
adaptation_steps = 100
# adaptive_kernel = tfp.mcmc.SimpleStepSizeAdaptation(
#     kernel, num_adaptation_steps=adaptation_steps
# )
adaptive_kernel = tfp.mcmc.DualAveragingStepSizeAdaptation(
    kernel, num_adaptation_steps=adaptation_steps
)

chain, trace = sample_chain(
    kernel=adaptive_kernel, num_results=100, current_state=tf.constant([0.0, 0.0])
)

@junpenglao (Collaborator) commented Sep 7, 2019

tfp.mcmc.*StepSizeAdaptation does not work out of the box with NUTS because NUTS does not have the same previous_kernel_results structure. You will need to manually specify the functions that get/set the step size and that get log_accept_prob:

adaptive_kernel = tfp.mcmc.DualAveragingStepSizeAdaptation(
    kernel, num_adaptation_steps=adaptation_steps,
    step_size_setter_fn=lambda pkr, new_step_size: pkr._replace(step_size=new_step_size),
    step_size_getter_fn=lambda pkr: pkr.step_size,
    log_accept_prob_getter_fn=lambda pkr: pkr.log_accept_ratio,
)

Note that it is again different if you are using a transformed kernel.
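The setter above works because the kernel results object is a namedtuple, so `_replace` returns a copy with the new step size filled in. A minimal stdlib sketch of the getter/setter pattern (the `MockResults` type here is hypothetical, standing in for the real NUTSKernelResults):

```python
from collections import namedtuple

# Stand-in for the NUTS kernel results; the real object is also a namedtuple
# with (among others) step_size and log_accept_ratio fields.
MockResults = namedtuple("MockResults", ["step_size", "log_accept_ratio"])

step_size_getter_fn = lambda pkr: pkr.step_size
step_size_setter_fn = lambda pkr, new_step_size: pkr._replace(step_size=new_step_size)
log_accept_prob_getter_fn = lambda pkr: pkr.log_accept_ratio

pkr = MockResults(step_size=1e-3, log_accept_ratio=-0.5)
new_pkr = step_size_setter_fn(pkr, 1e-2)

print(step_size_getter_fn(new_pkr))        # 0.01
print(log_accept_prob_getter_fn(new_pkr))  # -0.5
print(step_size_getter_fn(pkr))            # 0.001 -- _replace does not mutate the original
```

With a transformed kernel the results are nested one level deeper, so the lambdas would have to reach through the wrapper (e.g. `pkr.inner_results`) instead.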

@junpenglao closed this Sep 7, 2019
@janosh (Contributor, author) commented Sep 7, 2019

I see. I assume that will change in the future? Also, I think I read somewhere about plans to add subgradient adaptation methods like Adam? Is that still on the menu?

@junpenglao (Collaborator) commented Sep 7, 2019

> I assume that will change in the future?

We are working on improving MCMC and will take this limitation into account for sure :-)

> Also, I think I read somewhere about plans to add subgradient adaptation methods like Adam? Is that still on the menu?

Not sure I follow. There is already Adam in TF that you can use as an optimizer. Are you referring to some specific algorithm related to step size adaptation or NUTS?

@janosh (Contributor, author) commented Sep 7, 2019

> Not sure I follow. There is already Adam in TF that you can use as an optimizer. Are you referring to some specific algorithm related to step size adaptation or NUTS?

I found where I read that again: it was a comment by @SiegeLordEx.

@junpenglao (Collaborator) commented Sep 7, 2019

Oh I see, thanks for the context. It is interesting indeed, and I had not thought of that... assuming we can get the gradient of the step_size with respect to the log_accept_ratio, it should be doable. Not sure if @SiegeLordEx has looked into it yet, though.
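For context, the existing *StepSizeAdaptation kernels already drive the step size toward a target acceptance probability with a stochastic-approximation update; a gradient-based method like Adam would replace that update rule. A toy sketch of the classic multiplicative update (not TFP code, just the idea, with made-up `target` and `eta` values):

```python
import math

def adapt_step_size(step_size, accept_prob, target=0.75, eta=0.05):
    """Robbins-Monro-style update: grow the step size when the observed
    acceptance probability is above target, shrink it when below."""
    return step_size * math.exp(eta * (accept_prob - target))

# The step size drifts toward a value whose acceptance rate matches the target.
step = 1e-3
for accept in [0.2, 0.2, 0.9, 0.9, 0.75]:
    step = adapt_step_size(step, accept)
```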
