Include an example for finding optimal weights to reach a divergence goal #1
Include an example for finding the smallest set of weights (that is, the "precision", or the smallest sum of weights) that approximates a target probability distribution to within a maximum tolerable divergence. Without an example, this is not exactly trivial to do, given the lack of documentation.

EDIT: Clarification.

Comments
@peteroupc Thanks for the ticket. If I understand correctly, the algorithm will keep increasing the precision until the optimal achievable error is below the specified maximum tolerable divergence, is that right?

Correct; precision in the sense of the sum of weights.

OK, and are you interested in the least such precision that satisfies the maximum tolerable divergence? (Is that what you meant by "smallest set of weights"?)

Yes, that is what I mean.
@peteroupc I sketched this example for you; let me know if it makes sense and implements the behavior you are interested in. I specified the target distribution using Python floats (mpmath or Fraction instances should also be fine).
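The example referred to above is in the repository itself and is not reproduced here. As a rough, self-contained illustration of the search the thread describes, the following sketch uses total variation distance as the divergence, largest-remainder rounding for the fixed-sum subproblem, and a linear scan over candidate sums; the function names are hypothetical and this is not the repository's actual API, which supports other divergences and its own optimizer.

```python
from fractions import Fraction

def tv_distance(weights, target):
    # Total variation distance between weights/sum(weights) and target.
    total = sum(weights)
    return sum(abs(Fraction(w, total) - p) for w, p in zip(weights, target)) / 2

def best_weights_for_sum(target, total):
    # Integer weights summing to `total` minimizing TV distance to `target`
    # (largest-remainder rounding, which is L1-optimal for a fixed sum).
    shares = [p * total for p in target]
    weights = [int(s) for s in shares]
    # Give the leftover units to the entries with the largest remainders.
    order = sorted(range(len(target)),
                   key=lambda i: shares[i] - weights[i], reverse=True)
    for i in order[: total - sum(weights)]:
        weights[i] += 1
    return weights

def smallest_precision(target, tolerance):
    # Least sum of weights whose best approximation has TV <= tolerance.
    target = [Fraction(p) for p in target]  # exact arithmetic for the error check
    total = 1
    while True:
        weights = best_weights_for_sum(target, total)
        if tv_distance(weights, target) <= tolerance:
            return total, weights
        total += 1

precision, weights = smallest_precision([0.07, 0.24, 0.69], Fraction(1, 100))
print(precision, weights)  # least sum of weights within TV distance 0.01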
If you are allowing the precision to vary and are interested in an exact (zero-error) sampler, then you can consider the fast loaded dice roller, which finds zero-error and near-entropy-optimal samplers for both integer and floating-point weights.
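For reference, a minimal usage sketch of the fast loaded dice roller's Python package; the function names below follow that project's README (probcomp/fast-loaded-dice-roller), so treat them as an assumption if your version differs.

```python
# Sketch of zero-error sampling with the fast loaded dice roller.
# Assumes the `fldr` Python package; names follow its README.
from fldr import fldr_preprocess, fldr_sample

weights = [1, 1, 2, 3, 1]            # integer weights, sampled exactly
x = fldr_preprocess(weights)         # one-time preprocessing of the weights
samples = [fldr_sample(x) for _ in range(100)]
print(samples[:10])
```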
The interest here is approximating probability mass functions, not sampling them exactly.
The example works well, so closing. |