Ensure cache coherence for views of time instances #15453
Conversation
Thank you for your contribution to Astropy! 🌌 This checklist is meant to remind the package maintainers who will review this pull request of some common things to look for.
Force-pushed from 38fe2ea to 1eb8d00
Well, I wrote a bit too quickly about the simplicity: caches are never simple, I guess. But still, the PR is not bad; it is just that some of the care one has to take is not entirely obvious. Anyway, the tests now pass (the devdeps failures are unrelated).
    def __getstate__(self):
        # For pickling, we remove the cache from what's pickled
        state = (self.__dict__ if PYTHON_LT_3_11 else super().__getstate__()).copy()
        state.pop("_id_cache", None)
This is necessary because a `WeakValueDictionary` cannot be pickled (the similar removal of the `TimeFormat` cache is simply because there is generally no point in pickling a cache).
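The comment above can be illustrated with a small, self-contained sketch (the class and attribute names here are illustrative, not Astropy's actual implementation): a `weakref.WeakValueDictionary` holds weak references, which `pickle` cannot serialize, so `__getstate__` drops the cache and `__setstate__` recreates an empty one.

```python
import pickle
import weakref


class Payload:
    """A simple object that supports weak references."""


class Cached:
    """Hypothetical object that drops its unpicklable weak cache on pickling."""

    def __init__(self):
        self._id_cache = weakref.WeakValueDictionary()
        self.value = 42

    def __getstate__(self):
        # Remove the WeakValueDictionary before pickling: it holds weak
        # references, which pickle refuses to serialize.
        state = self.__dict__.copy()
        state.pop("_id_cache", None)
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        # Recreate an empty cache on unpickling; its contents were
        # transient anyway.
        self._id_cache = weakref.WeakValueDictionary()


obj = Cached()
keep_alive = Payload()
obj._id_cache["key"] = keep_alive

restored = pickle.loads(pickle.dumps(obj))
print(restored.value)            # 42
print(len(restored._id_cache))   # 0: the cache was dropped and recreated
```

Without the `__getstate__` override, `pickle.dumps(obj)` would raise a `TypeError` as soon as the cache contains an entry, because the stored values are weak references.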
Force-pushed from 1eb8d00 to 6b921cc
No worries, the rebase of this was easier than the other way around! While rebasing, I also parametrized the test to check masked times as well, since masked instances do not own their data directly. It all feels a bit fragile -- caching is hard!
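The point about masked instances not owning their data can be seen with plain NumPy (this is just an illustration of masked-array view semantics, not the parametrized Astropy test itself): a `numpy.ma.MaskedArray` keeps its payload in a separate data array, and slices still share that underlying storage, so writes through the parent are visible through the view.

```python
import numpy as np

# A masked array stores its payload separately from the mask; slicing
# returns a view that shares the underlying data with the parent.
m = np.ma.MaskedArray([1.0, 2.0, 3.0], mask=[False, True, False])
view = m[:2]

# Writing through the parent is visible through the view, which is
# exactly why cached values derived from the data can go stale.
m[0] = 99.0
print(view[0])  # 99.0
```

This shared-storage behavior is what makes cache invalidation for views (masked or not) worth testing explicitly.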
As noted, caching is difficult. But as far as I can tell this looks good. 🤞
Owee, I'm MrMeeseeks, Look at me. There seems to be a conflict, please backport manually. Here are approximate instructions:
And apply the correct labels and milestones. Congratulations, you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon! If these instructions are inaccurate, feel free to suggest an improvement.
Yes, I think it is OK, mostly because we test this well! Note that I can still backport, but it now needs to be manual given the conflict.
Description
This pull request ensures that the `Time` caches of formats and scales do not get out of sync with the actual data, even if another instance holding a view of the data is written to. E.g., if one does `t01 = t[:2]` and afterwards sets `t[0]`, it is now guaranteed that `t01.value` will correctly reflect the change in value.

Fixes #15452
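The bug being fixed can be reproduced with a toy stand-in (this is a deliberately naive sketch, not Astropy's `Time` implementation; all names are illustrative): an instance that caches a derived value will serve stale results once another instance writes to the shared underlying array.

```python
import numpy as np


class TimeLike:
    """Toy stand-in for Time: slicing returns a new instance that views
    the same underlying data, and a derived value is naively cached."""

    def __init__(self, jd):
        # asanyarray does not copy, so views passed in stay views.
        self.jd = np.asanyarray(jd, dtype=float)
        self._cache = {}

    def __getitem__(self, item):
        # Like Time, slicing shares the underlying data.
        return TimeLike(self.jd[item])

    @property
    def value(self):
        # Naive caching: never invalidated, so it goes stale if another
        # instance writes to the shared data. This is the hazard the PR
        # guards against.
        if "value" not in self._cache:
            self._cache["value"] = self.jd.copy()
        return self._cache["value"]


t = TimeLike([1.0, 2.0, 3.0])
t01 = t[:2]
print(t01.value)   # [1. 2.]  (now cached)

t.jd[0] = 99.0     # write through the parent; t01's data changed too
print(t01.value)   # still [1. 2.] -- the stale result the PR prevents
```

With the fix in place, the real `Time` detects that the underlying data changed and recomputes, so `t01.value` reflects the write.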
@taldcroft - this was considerably simpler than I thought! Though I think that perhaps the cache itself also needs to become a `WeakValueDictionary` so that, say, if one has `tt = t.tt` and later does `del tt`, the cache does not keep `tt` alive. But probably better as a separate PR, since that would not be a bug fix.

EDIT: on second thought, I am not sure a `WeakValueDictionary` would work for "scale", since I think part of the original rationale was that one could do `jd1 = t.tt.jd1; jd2 = t.tt.jd2` and not have `tt` calculated twice. Anyway, that's orthogonal to this PR.
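The trade-off described above comes down to standard `weakref.WeakValueDictionary` semantics, sketched here with stand-in names (nothing below is Astropy code): a weak cache lets an entry disappear as soon as the last outside reference is dropped, whereas a plain dict keeps it alive.

```python
import gc
import weakref


class Derived:
    """Stand-in for a derived instance such as t.tt (illustrative only)."""


strong_cache = {}
weak_cache = weakref.WeakValueDictionary()

a = Derived()
b = Derived()
strong_cache["tt"] = a
weak_cache["tt"] = b

# Drop the only outside references. gc.collect() makes the cleanup
# deterministic on non-refcounting Python implementations.
del a, b
gc.collect()

print("tt" in strong_cache)  # True: a plain dict keeps the entry alive
print("tt" in weak_cache)    # False: the entry vanished with its object
```

This also shows why a weak cache might defeat the original rationale: between `t.tt.jd1` and `t.tt.jd2`, nothing holds a strong reference to the intermediate `t.tt`, so a weak cache could let it be collected and recomputed.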