
Include an example for finding optimal weights to reach a divergence goal #1

Closed. peteroupc opened this issue Apr 20, 2020 · 8 comments.

peteroupc commented Apr 20, 2020

Include an example for finding the smallest set of weights (that is, the smallest "precision", or sum of weights) that approximates a target probability distribution to within a maximum tolerable divergence. Without an example, this is not exactly trivial to do, given the lack of documentation.

  • Input: Target distribution; maximum tolerable divergence (e.g., 2^-64); kind of divergence; option to limit the sum of weights to a power of 2. The target distribution can be a list of ratios, but it would be preferable if it could also be a probability mass function that supports mpmath numbers (though I admit that mpmath numbers are ultimately ratios too).
  • Output: Optimal probabilities in the form of integer weights.

EDIT: Clarification.
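A sketch of the requested behavior, assuming a precision-doubling search and total variation distance as the divergence (the function name `integer_weights` and the naive rounding scheme are illustrative only, not part of this repository's API):

```python
from fractions import Fraction

def integer_weights(target, max_divergence, max_log2_precision=128):
    """Find the least power-of-two precision (sum of weights) whose
    rounded integer weights approximate `target` to within
    `max_divergence` in total variation distance.

    `target` is a list of exact probabilities (Fractions summing to 1).
    Returns (log2 of the precision, list of integer weights).
    """
    for k in range(1, max_log2_precision + 1):
        m = 2 ** k  # candidate sum of weights
        weights = [round(p * m) for p in target]
        # Repair rounding error so the weights sum to exactly m.
        weights[weights.index(max(weights))] += m - sum(weights)
        # Total variation divergence between target and weights/m.
        tv = sum(abs(p - Fraction(w, m)) for p, w in zip(target, weights)) / 2
        if tv <= max_divergence:
            return k, weights
    raise ValueError("no precision up to the limit meets the tolerance")
```

For example, approximating a uniform distribution over three outcomes to within a total variation of 1/100 yields weights summing to 2^7 = 128. A real solution would optimize the weights at each precision rather than just rounding, which is what the repository's code is for.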

fsaad (Collaborator) commented Apr 24, 2020

@peteroupc Thanks for the ticket. If I understand correctly, the algorithm will keep increasing the precision until the optimal achievable error is below the specified maximum tolerable divergence, is that right?

peteroupc (Author) commented:

Correct; precision in the sense of the sum of weights.

fsaad (Collaborator) commented Apr 24, 2020

OK, and are you interested in the least such precision that satisfies the maximum tolerable divergence? (is that what you meant by "smallest set of weights"?)

peteroupc (Author) commented:

Yes, that is what I meant.

fsaad (Collaborator) commented Apr 24, 2020

@peteroupc I sketched this example for you:
https://github.com/probcomp/optimal-approximate-sampling/blob/c269d6831bcedcae8ae11c5b719a8d97afe3b194/examples/maxerror.py

Let me know if it makes sense and implements the behavior you are interested in. I specified the target distribution using Python floats; mpmath or Fraction instances should also work fine.

fsaad (Collaborator) commented Apr 24, 2020

If you are allowing the precision to vary and are interested in an exact (zero-error) sampler, then you can consider the fast loaded dice roller, which finds zero-error, near-entropy-optimal samplers for both integer and floating-point weights.
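As background on why zero error is achievable with integer weights: a uniform random integer below the weight sum can be mapped directly to an outcome. A minimal sketch of that idea (far less entropy-efficient than the fast loaded dice roller, and not its actual algorithm):

```python
import random

def exact_sample(weights, rng=random):
    """Draw index i with probability weights[i] / sum(weights), exactly.

    Draws a uniform integer below the weight sum and walks the
    cumulative weights; with integer weights this introduces no
    approximation error at all.
    """
    u = rng.randrange(sum(weights))
    for i, w in enumerate(weights):
        if u < w:
            return i
        u -= w
```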

peteroupc (Author) commented:

The interest here is in approximating probability mass functions:

  • that have non-integer powers,
  • that have transcendental functions, such as the Poisson distribution or most cases of the geometric distribution, or
  • that would have huge integer weights if represented exactly, such as factorials.
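As an illustration of those cases, a transcendental PMF such as the Poisson can be truncated, evaluated in floating point, and converted to exact ratios (every finite double is a dyadic rational) before being handed to a weight optimizer; `truncated_poisson` is a hypothetical helper, not part of this repository:

```python
import math
from fractions import Fraction

def truncated_poisson(lam, n):
    """First n probabilities of a Poisson(lam) PMF, renormalized to
    sum to 1 exactly.

    The true weights e^-lam * lam^k / k! are transcendental, so we
    evaluate them as doubles and convert each double to the exact
    rational number it represents.
    """
    probs = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(n)]
    total = sum(Fraction(p) for p in probs)
    return [Fraction(p) / total for p in probs]
```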

peteroupc (Author) commented:

The example works well, so closing.
