perf: add __slots__ to SubtypeContext #19397
Conversation
Follow up to afd5a38.
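For context, a minimal sketch (not mypy's actual code) of what adding `__slots__` buys: a slotted class stores attributes in fixed slots instead of allocating a per-instance `__dict__`, which saves memory and slightly speeds up attribute access and construction. The field name below is illustrative, not necessarily a real SubtypeContext attribute.

```python
# Hedged sketch: per-instance __dict__ vs. fixed slots.
class PlainContext:
    def __init__(self, ignore_promotions: bool = False) -> None:
        self.ignore_promotions = ignore_promotions


class SlottedContext:
    __slots__ = ("ignore_promotions",)  # fixed attribute storage, no __dict__

    def __init__(self, ignore_promotions: bool = False) -> None:
        self.ignore_promotions = ignore_promotions


plain = PlainContext()
slotted = SlottedContext()
print(hasattr(plain, "__dict__"))    # True: a dict is allocated per instance
print(hasattr(slotted, "__dict__"))  # False: attributes live in fixed slots
```

A side effect worth noting: assigning an attribute not listed in `__slots__` raises AttributeError, which also catches typos early.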
According to mypy_primer, this change doesn't affect type check results on a corpus of open source code. ✅
Nice, thanks! This shouldn't do any harm, but it should probably have less impact: one SubtypeContext is reused in nested is_subtype calls, while visitors are created in each of them.
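The reuse pattern described in this comment can be sketched roughly as follows; the class and function names are hypothetical stand-ins for mypy's internals, used only to show why a per-call allocation dominates a once-per-check allocation.

```python
# Hypothetical sketch: one context object threads through the whole
# recursive subtype check, while a visitor is constructed on every call,
# so the visitor's allocation cost is paid far more often.
counts = {"visitors": 0, "contexts": 0}


class Ctx:
    __slots__ = ("ignore_promotions",)

    def __init__(self, ignore_promotions: bool = False) -> None:
        counts["contexts"] += 1
        self.ignore_promotions = ignore_promotions


class Visitor:
    def __init__(self, ctx: Ctx) -> None:
        counts["visitors"] += 1
        self.ctx = ctx


def is_subtype(depth: int, ctx: Ctx) -> bool:
    visitor = Visitor(ctx)          # a new visitor per call
    if depth == 0:
        return True
    return is_subtype(depth - 1, ctx)  # the same ctx is reused


is_subtype(3, Ctx())
print(counts)  # {'visitors': 4, 'contexts': 1}
```

With a recursion depth of 3, four visitors are built but only one context, which is why optimizing visitor construction would likely matter more.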
A potential workaround to the noise is to add a few different optimizations in a branch and measure their overall impact. If they add up to close to a 1.0% performance improvement, we can be reasonably sure that they are beneficial in aggregate.
Ok. Good to know! 🙂
Just out of interest, I did another run to see how this and Stanislav's previous one stack up over the commit before:
So neither of them is significant. I'll leave it up to you whether you want to merge or close this - it won't do any harm, but it isn't going to make much of a dent, as you say.
I thought about that, but ultimately ruled it out: mixing multiple updates together means some of them can be masked regressions, with the total improvement being caused by the other changes. Maybe we should write a benchmark suite for subtype checks instead? Something that doesn't take tens of minutes to run.
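A fast micro-benchmark of the kind suggested here could be sketched with the standard library's timeit module; the classes below are illustrative stand-ins, not mypy's real types, and the absolute numbers will vary by machine.

```python
# Hedged sketch: compare construction cost of a plain class vs. a
# slotted class, the kind of isolated measurement a subtype-check
# benchmark suite could build on.
import timeit


class Plain:
    def __init__(self) -> None:
        self.left = 1
        self.right = 2


class Slotted:
    __slots__ = ("left", "right")

    def __init__(self) -> None:
        self.left = 1
        self.right = 2


t_plain = timeit.timeit(Plain, number=100_000)
t_slotted = timeit.timeit(Slotted, number=100_000)
print(f"plain: {t_plain:.4f}s  slotted: {t_slotted:.4f}s")
```

Because each run takes well under a second, a suite of such targeted measurements would sidestep the tens-of-minutes full-corpus runs, at the cost of only capturing isolated effects rather than end-to-end impact.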
This only makes sense if all the updates are new optimizations that you are working on. If you can't measure the impact of an individual optimization, it can be hard to decide whether to merge it. If you have five optimizations that together improve performance by 1.0%, it will be easier to decide to accept all of them as a group. Even if some of them are actually regressions, we can live with that, since we have high confidence that together they improve performance significantly. (There still needs to be some plausible justification for why each optimization is likely to improve performance.)
Let's close this since managing