use cache for predctions in likelihood #80
Conversation
That's a brilliant idea! Do I understand correctly that the point is that the cost of hashing is negligible, since it is done only once and is small compared to the calculation of all observables? Did you check how long it roughly takes (just to understand if it would affect e.g. a …)? We should also add a comment to the docstring about the existence of caching, and a comment in the code as to what the line defining … does. Concerning the hashing, I see a potential problem with hashing … For …
The hashing takes around 30 μs on my laptop. How fast is the fastest observable? Probably still some orders of magnitude slower?
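For reference, the timing claim above can be checked with a quick sketch like the following (the dict contents are dummy values standing in for Wilson coefficients, not flavio's actual data):

```python
# Rough sketch: measure the per-call cost of hashing a small dict of
# coefficient values, as one would before deciding caching is worthwhile.
import timeit

wc_dict = {f"C{i}": complex(i, i) for i in range(20)}  # dummy coefficients

def hash_wc(d=wc_dict):
    # frozenset makes the (unordered) dict items hashable
    return hash(frozenset(d.items()))

# average time per call in seconds
t = timeit.timeit(hash_wc, number=10_000) / 10_000
print(f"hash cost per call: {t * 1e6:.2f} microseconds")
```

On typical hardware this lands in the microsecond range, well below the cost of evaluating even a fast observable.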
OK, I will add some comments.
It might be good to actually recompute the hash of a …

```python
hash((frozenset(wc.dict.items()), wc.basis, wc.scale))
```

Using such a hash, the hash for two different …
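To illustrate the point of building the hash from the contents rather than from object identity, here is a minimal sketch (`FakeWC` is a stand-in class, not the actual `wcxf.WC` API): two distinct instances holding the same coefficient values, basis and scale hash identically.

```python
# Illustrative stand-in for a Wilson coefficient container.
class FakeWC:
    def __init__(self, dict_, basis, scale):
        self.dict = dict_
        self.basis = basis
        self.scale = scale

    def __hash__(self):
        # content-based hash: equal contents -> equal hash,
        # regardless of object identity
        return hash((frozenset(self.dict.items()), self.basis, self.scale))

wc1 = FakeWC({"C9": 1.2}, "flavio", 4.8)
wc2 = FakeWC({"C9": 1.2}, "flavio", 4.8)
assert wc1 is not wc2          # two distinct objects ...
assert hash(wc1) == hash(wc2)  # ... but the same content hash
```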
So for me, hashing the …
OK, so in conclusion I think we can merge your PR already, with only …
I can then separately implement `hash((self.wc, frozenset(self._options)))`, if I am not mistaken.
… actually, since this is just three lines, you can just add this to your PR:

```python
def __hash__(self):
    """Return a hash of the `WilsonCoefficient` instance.

    This assumes that `self.wc` is not modified over its lifetime.
    The hash only changes when options are modified."""
    return hash((self.wc, frozenset(self._options)))
```
Sorry, this docstring is misleading, as the attribute … Better:

```python
"""Return a hash of the `WilsonCoefficient` instance.

The hash changes when Wilson coefficient values or options are modified.
It assumes that `wcxf.WC` instances are not modified after instantiation."""
```
I realized that this solution, as it stands, has a memory leak: it caches all calls, which will quickly eat up all memory in a scan. So either we need an LRU cache or, much simpler, we just cache the last value called. This should be sufficient for our use case.
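The "cache only the last call" idea can be sketched as a small decorator. Names here are illustrative, not flavio's actual API; the point is that memory use stays constant because only one entry is ever kept.

```python
# Minimal sketch: cache only the most recent call, keyed by the
# argument's hash, to avoid unbounded memory growth during a scan.
def cache_last(func):
    last = {"key": None, "value": None}

    def wrapper(arg):
        key = hash(arg)
        if last["key"] != key:
            # new argument: recompute and overwrite the single cache slot
            last["key"] = key
            last["value"] = func(arg)
        return last["value"]
    return wrapper

calls = []  # record which arguments actually trigger a computation

@cache_last
def expensive_prediction(x):
    calls.append(x)
    return x * 2

expensive_prediction(3)
expensive_prediction(3)  # served from cache, no recomputation
expensive_prediction(4)  # new argument evicts the old entry
assert calls == [3, 4]
```

For hashable arguments, `functools.lru_cache(maxsize=1)` from the standard library achieves the same effect without a hand-rolled decorator.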
This actually does not work, since …
Actually, …
@DavidMStraub I think this is ready to be merged.
This PR implements caching for predictions of observables inside a `MeasurementLikelihood` instance.
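Putting the pieces of the discussion together, the overall scheme can be sketched as follows. The class and attribute names are illustrative, not flavio's actual implementation: predictions are recomputed only when the hash of the Wilson coefficient object changes, and only the last result is kept.

```python
# Hedged sketch of hash-keyed, last-value prediction caching in a
# likelihood-like object (illustrative names, not the flavio API).
class CachedLikelihood:
    def __init__(self, observables):
        # observables: mapping of name -> function of the WC object
        self.observables = observables
        self._cache_hash = None
        self._cache_predictions = None

    def predictions(self, wc):
        key = hash(wc)
        if key != self._cache_hash:
            # expensive step: evaluate every observable once
            self._cache_hash = key
            self._cache_predictions = {
                name: func(wc) for name, func in self.observables.items()
            }
        return self._cache_predictions

# usage: count how often the expensive evaluation actually runs
counter = {"n": 0}

def doubled(wc):
    counter["n"] += 1
    return wc * 2

ll = CachedLikelihood({"doubled": doubled})
assert ll.predictions(3) == {"doubled": 6}
assert ll.predictions(3) == {"doubled": 6}  # cache hit
assert counter["n"] == 1                    # evaluated only once
ll.predictions(4)                           # new hash -> recompute
assert counter["n"] == 2
```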