support gradients in TensorFlow for Python, PyMC3, and Stan models #164
This pull request is a work in progress. Comments from interested parties are welcome, as is feedback on the implementation.
Gradients of the log density are now supported for all model wrappers for use within a TensorFlow session.

- PythonModel uses autograd's automatic differentiation.
- PyMC3Model uses PyMC3's automatic differentiation.
- StanModel uses Stan's automatic differentiation.

In all wrappers, the gradient can be overridden by specifying it manually. The manual gradient is written in native Python. This is useful for speeding up computation: manual gradients are often faster than automatic gradients (especially reverse-mode autodiff). It is also useful for the PythonModel in cases where its gradient is not supported by autograd.

Examples showing proof of concept are added. Each example is a normal posterior; inference is done via variational inference with the reparameterization trick, which uses the model gradient. The model is written in 4 ways:
References:

- tf.py_func op.
- tf.RegisterGradient.
- tf.Graph.gradient_override_map.
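The manually specified gradients described above can be sanity-checked against the log density with a central finite difference. A minimal sketch in plain Python, with hypothetical function names and a standard-normal log density as the example:

```python
def log_density(z):
    # Unnormalized log density of a standard normal: log p(z) = -z**2 / 2 + const.
    return -0.5 * z * z

def manual_grad(z):
    # Hand-written gradient of log_density, as one might supply to a model wrapper.
    return -z

def check_grad(f, g, z, h=1e-5, tol=1e-6):
    """Compare the analytic gradient g against a central finite difference of f."""
    numeric = (f(z + h) - f(z - h)) / (2.0 * h)
    return abs(numeric - g(z)) < tol

print(check_grad(log_density, manual_grad, 0.7))  # True if the manual gradient is correct
```

A check like this is cheap insurance: a wrong manual gradient silently biases the variational updates rather than raising an error.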