
Conversation

@jburnim (Member) commented Dec 9, 2020

No description provided.

ColCarroll and others added 30 commits December 8, 2020 10:26
… during MCMC.

The `PreconditionedHamiltonianMonteCarlo` kernel accepts a momentum distribution from which it samples momentum. For example, if you suspect your posterior covariance is `S`, then using the kinetic energy function `K(p) = 0.5 * p^T S p` may improve the performance of Hamiltonian Monte Carlo.

This transition kernel is intended to be used to estimate a diagonal covariance matrix, by updating the underlying momentum distribution according to an online estimate of the posterior variance. Note that this is *not* a valid MCMC scheme on its own, but the user may take the estimate this transition kernel produces and supply it as a fixed momentum distribution for sampling.
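
A hedged sketch of the idea (the target and scale values here are made up for illustration; only the kernel and its `momentum_distribution` argument come from this log):

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

# Target with badly mismatched scales across dimensions.
target = tfd.MultivariateNormalDiag(scale_diag=[1., 10.])

# Momentum with (approximately) the inverse posterior scales, so the kinetic
# energy is K(p) = 0.5 * p^T S p, with S the posterior covariance.
momentum_distribution = tfd.MultivariateNormalDiag(scale_diag=[1., 0.1])

kernel = tfp.experimental.mcmc.PreconditionedHamiltonianMonteCarlo(
    target_log_prob_fn=target.log_prob,
    step_size=0.3,
    num_leapfrog_steps=10,
    momentum_distribution=momentum_distribution)
```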

PiperOrigin-RevId: 343530087
Unfortunately, XLA:CPU will optimize away the correction term, making this useless, unless reassociation is disabled (which may make other reductions slower). (We set this env var in the test, in the `__name__ == '__main__'` block.)

However, TPU and GPU are where we would often want to use this, when resolving differences between large-magnitude `log_prob`s.
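
For reference, the compensated-summation pattern in question (a generic Python sketch, not the PR's TF code):

```python
def kahan_sum(xs):
  """Kahan (compensated) summation over a sequence of floats."""
  total, correction = 0.0, 0.0
  for x in xs:
    y = x - correction            # Re-apply previously lost low-order bits.
    t = total + y                 # Low-order bits of y may be lost here...
    correction = (t - total) - y  # ...and are recovered into `correction`.
    total = t
  return total
```

Algebraically `(t - total) - y` is exactly zero, which is why a reassociating optimizer (such as XLA:CPU under fast-math) can erase the correction term entirely.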

PiperOrigin-RevId: 343543748
PiperOrigin-RevId: 343545188
This adds multipart support to Chain, and introduces Restructure and JointMap bijectors for operations on structured inputs.
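
A minimal sketch of the two new bijectors (values chosen for illustration; the default flat-list input structure for `Restructure` is my reading of its docs):

```python
import tensorflow_probability as tfp

tfb = tfp.bijectors

# JointMap applies a structure of bijectors to a matching structure of parts.
jm = tfb.JointMap({'a': tfb.Exp(), 'b': tfb.Identity()})
jm.forward({'a': 0., 'b': 2.})  # => {'a': 1., 'b': 2.}

# Restructure repacks parts into a new structure without changing values.
rs = tfb.Restructure(output_structure={'x': 0, 'y': 1})
rs.forward([3., 4.])  # => {'x': 3., 'y': 4.}
```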

PiperOrigin-RevId: 343573225
…erarchy.

The solution is not beautiful. In order to construct a valid CT subclass *class
instance* to pass to the decorator, the subclass needs to have a `_type_spec`
property (otherwise ABCMeta complains and class creation fails, before the
decorator can ever run). So we're forced to implant a trivial `_type_spec`
definition, which is then overwritten by the decorator.

This change also refactors the internal logic of `auto_composite_tensor` fairly
substantially, to better separate the AutoCompositeTensorTypeSpec behaviors
from the decorator.

This change also uses the new behavior in `tfp.experimental.stats`.
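
For context, a hedged sketch of how the decorator is used (the `RunningTotal` class is hypothetical, not from this change; the usage pattern follows the decorator's docs):

```python
import tensorflow as tf
import tensorflow_probability as tfp

# Hypothetical example class; the decorator derives a TypeSpec so instances
# behave as CompositeTensors.
@tfp.experimental.auto_composite_tensor
class RunningTotal(object):

  def __init__(self, total):
    self.total = tf.convert_to_tensor(total, dtype_hint=tf.float32)

@tf.function
def bump(state):
  # Instances can cross tf.function boundaries because the class is now a
  # CompositeTensor with an auto-derived _type_spec.
  return RunningTotal(state.total + 1.)

state = bump(RunningTotal(0.))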

PiperOrigin-RevId: 343575343
…st `from_stats`.

PiperOrigin-RevId: 343881821
When defining the inverse log-det-jacobian in terms of an automatic derivative, as we do here, its gradient requires *second* derivatives of the underlying inverse function. We therefore need to provide annotations for the second derivatives, using the inverse function theorem.

It's a bit of a mess to do this in a way that works for both TF and JAX, but it seems to work. I'd welcome suggestions for making this neater.
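
For reference, the inverse-function-theorem identities involved, checked numerically (a JAX sketch; `tanh`/`arctanh` are stand-ins for the bijector's forward and inverse functions):

```python
import jax
import jax.numpy as jnp

f, g = jnp.tanh, jnp.arctanh   # g is the inverse of f on (-1, 1).

y = 0.3
x = g(y)
df = jax.grad(f)(x)             # f'(x)
d2f = jax.grad(jax.grad(f))(x)  # f''(x)

# Inverse function theorem: g'(y) = 1 / f'(g(y)); differentiating once
# more gives the second-derivative annotation: g''(y) = -f''(g(y)) / f'(g(y))**3.
assert jnp.allclose(jax.grad(g)(y), 1. / df)
assert jnp.allclose(jax.grad(jax.grad(g))(y), -d2f / df**3)
```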

PiperOrigin-RevId: 343928308
Currently the HalfNormal distribution uses the default `log_prob(x) = log(prob(x))` implementation, which underflows for large x. Implementing log_prob directly improves stability and matches the implementation of the Normal distribution.

I removed the direct `prob` implementation because there's no clear reason to keep both: relying on the default `prob(x) = exp(log_prob(x))` implementation is consistent with the Normal distribution, and all tests pass.
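
For intuition, the direct form (a NumPy sketch, not necessarily the checked-in code):

```python
import numpy as np

def half_normal_log_prob(x, scale):
  # log pdf of HalfNormal(scale) for x >= 0:
  #   log(sqrt(2 / pi) / scale) - 0.5 * (x / scale)**2
  # Computing this directly avoids the underflow in log(prob(x)) when the
  # density is tiny, i.e. for large x.
  z = x / scale
  return 0.5 * np.log(2. / np.pi) - np.log(scale) - 0.5 * z**2
```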

PiperOrigin-RevId: 343970631
…ad of unrolling huge TF graphs.

Also delete the dynamic-shape tests, because it doesn't make sense to test
the `from_shape` constructor when the shape isn't actually known. They
should be replaced with dynamic-shape tests of `from_example` later.

PiperOrigin-RevId: 344124189
Subclass `_parameter_properties` methods did take an eps argument in an earlier revision, but in the checked-in version it's just dtypes all the way down.

PiperOrigin-RevId: 344274572
…t-ratio.

This is what tfp.mcmc.HMC does.

PiperOrigin-RevId: 344309370
… valid correlation matrices in more cases.

PiperOrigin-RevId: 344321285
…he Hypothesis-level timeout from the command line.

PiperOrigin-RevId: 344326945
…formedDistribution`s.

PiperOrigin-RevId: 344932582
This is due to travis-ci.org being deprecated.

One immediate advantage of the switch is that GitHub Actions has more powerful
machines, which aren't shared across projects as visibly as Travis machines
were. That said, they're not powerful enough to eschew test sharding
altogether.

PiperOrigin-RevId: 345051868
This fixes math formatting on tensorflow.org, and gives more descriptive variable names to some variables.

PiperOrigin-RevId: 345053833
…SIS=1.

It turns out the underlying Hypothesis library was "randomizing" by initializing
its PRNG from the system time, which is just asking for trouble
whenever more than one processor is involved in a test run.
PiperOrigin-RevId: 345065008
PiperOrigin-RevId: 345073056
The correct reference should be:
[3] E. Schmidt, Über die Auflösung linearer Gleichungen mit unendlich vielen
      Unbekannten, Rend. Circ. Mat. Palermo (1) 25:53-77 (1908).
brianwa84 and others added 20 commits December 8, 2020 10:31
The real issue is illustrated by the test added to joint_distribution_test: prior
to this fix, you couldn't actually compute gradients with respect to the
samples of joint distributions, due to some structure mismatches. The fix is
likely overkill, but to the extent that the documentation for
`make_sharded_log_prob_parts` doesn't single out lists/tuples as the only
supported structures, this seems an acceptable thing to do.
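
For context, the kind of structured-gradient computation such a test exercises (a loose sketch, not the distributed code path itself; the model is made up):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

jd = tfd.JointDistributionNamed(dict(
    loc=tfd.Normal(0., 1.),
    obs=lambda loc: tfd.Normal(loc, 1.)))

sample = jd.sample(seed=42)  # A dict with keys 'loc' and 'obs'.
with tf.GradientTape() as tape:
  for t in tf.nest.flatten(sample):
    tape.watch(t)
  lp = jd.log_prob(sample)

# Gradients come back with the same dict structure as `sample`.
grads = tape.gradient(lp, sample)
```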

PiperOrigin-RevId: 345327288
PiperOrigin-RevId: 345363927
…fd.Sample`.

This can substantially increase the precision of accumulation for log_prob evaluations over many independent elements. Note the fast-math caveat for XLA:CPU.
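
A hedged sketch of how the option is used (the flag name `experimental_use_kahan_sum` is my assumption of how it is exposed; it isn't named in this log):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Assumed flag: compensated (Kahan) accumulation across the sample axis.
dist = tfd.Sample(
    tfd.Normal(0., 1.), sample_shape=100_000,
    experimental_use_kahan_sum=True)

# Summing ~1e5 per-element log-probs in float32 is where the compensation
# pays off; note the fast-math caveat for XLA:CPU above.
lp = dist.log_prob(tf.zeros([100_000]))
```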

PiperOrigin-RevId: 345453227
When initializing, the Chain bijector checks the `inverse_event_ndims` of each component bijector. In certain cases this was not defined and threw an exception (see the unit test for one case that would trigger this).

PiperOrigin-RevId: 345568297
…of streaming MCMC.

Supports:
- Reductions
- Streaming (by not tracing) when only the reductions are wanted
- Tracing intermediate results of the reductions
- Without having to know about `WithReductions` or `finalize` or
  any of the other machinery that makes this work under the hood.

The `sample_fold` driver is now basically redundant, and should
probably be removed (or at least rewritten as an alias of
`run_kernel`).

The original `sample_chain` driver in non-experimental tfp.mcmc is now
also basically redundant, and should probably be rewritten as an alias
for `run_kernel`, and possibly also deprecated.

Some remaining limitations and possible future directions are noted in code
comments. A hedged sketch of the intended usage follows.
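
Only the driver's name and the reducer concept come from this log; the argument names below are guesses for illustration, not the actual `run_kernel` signature:

```python
import tensorflow_probability as tfp

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=lambda x: -x**2 / 2.,
    step_size=0.1,
    num_leapfrog_steps=3)

# NOTE: argument names are assumptions; the commit message does not show
# run_kernel's actual signature.
result = tfp.experimental.mcmc.run_kernel(
    kernel=kernel,
    num_steps=1000,
    current_state=0.,
    reducer=tfp.experimental.mcmc.ExpectationsReducer())
```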

PiperOrigin-RevId: 345740916
…affine bijectors.

Specifically, the bijector must either be scalar, or it must be a purely structural transformation (as embodied by the new property `_is_permutation`).

PiperOrigin-RevId: 346150970
For NumPy, the covariance is divided by N-1.
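
That is, matching NumPy's default unbiased normalization:

```python
import numpy as np

x = np.random.randn(5, 100)  # 5 variables, 100 observations.
centered = x - x.mean(axis=1, keepdims=True)
manual = centered @ centered.T / (x.shape[1] - 1)  # divide by N - 1
assert np.allclose(np.cov(x), manual)  # np.cov uses N - 1 by default
```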

PiperOrigin-RevId: 346203390
PiperOrigin-RevId: 346227340
PiperOrigin-RevId: 346338660
@review-notebook-app
Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter Notebooks.

@google-cla
google-cla bot commented Dec 9, 2020

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double-check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment `@googlebot I fixed it.`. If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@google-cla google-cla bot added the cla: no Declares that the user has not signed CLA label Dec 9, 2020

@jburnim (Member, Author) commented Dec 9, 2020

@googlebot I fixed it.

All commits in this PR are cherry-picked from https://github.com/tensorflow/probability/tree/master (or written by me).

@jburnim jburnim added cla: yes Declares that the user has signed CLA and removed cla: no Declares that the user has not signed CLA labels Dec 9, 2020
@googlebot

A Googler has manually verified that the CLAs look good.

(Googler, please make sure the reason for overriding the CLA status is clearly documented in these comments.)

ℹ️ Googlers: Go here for more info.


@jburnim jburnim merged commit cc2c37e into tensorflow:r0.12 Dec 9, 2020
