Slot no more: overhauled internal algorithm #296
Conversation
Instead of grabbing the arc, just pass back an `&mut Runtime`. The eventual goal is to get rid of the lock on the `set` pathway altogether, but one step at a time.
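A minimal sketch of the kind of signature change being described, under the assumption that the storage owns its runtime directly; `Storage` and `Runtime` here are illustrative stand-ins, not salsa's actual types:

```rust
// Hypothetical sketch, not salsa's real API: instead of cloning an Arc
// out of the storage, the accessor hands back a mutable borrow.
struct Runtime {
    revision: u64,
}

struct Storage {
    runtime: Runtime,
}

impl Storage {
    // New shape: return `&mut Runtime` rather than `Arc<Runtime>`.
    // Exclusive access is now enforced by the borrow checker, which is
    // a step toward dropping the lock on the `set` pathway entirely.
    fn runtime_mut(&mut self) -> &mut Runtime {
        &mut self.runtime
    }
}

fn set_input(storage: &mut Storage) {
    // Bump the revision through the borrowed runtime; no Arc, no lock.
    storage.runtime_mut().revision += 1;
}
```

Because `set_input` takes `&mut Storage`, the compiler statically rules out concurrent writers, so no runtime synchronization is needed on this path.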
Because dash-map isn't indexable, we need to store a copy of the key and have two separate maps. I expect to iterate on the best data structures here.
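A hedged sketch of the two-map layout described above (the `QueryMap` name and shape are hypothetical, and a plain `HashMap` stands in for the concurrent map): one map assigns each key a dense index, and a second map stores the memoized entry under that index, which forces us to keep a copy of the key.

```rust
use std::collections::HashMap;

// Hypothetical sketch: because a concurrent map like DashMap cannot be
// addressed by a dense index, we keep two maps and clone the key into
// both of them.
struct QueryMap<K, V> {
    // key -> dense index
    key_to_index: HashMap<K, u32>,
    // dense index -> (copy of the key, memoized value)
    index_to_entry: HashMap<u32, (K, V)>,
    next_index: u32,
}

impl<K: Clone + std::hash::Hash + Eq, V> QueryMap<K, V> {
    fn new() -> Self {
        QueryMap {
            key_to_index: HashMap::new(),
            index_to_entry: HashMap::new(),
            next_index: 0,
        }
    }

    // Insert a value, storing a copy of the key in both maps and
    // returning the dense index assigned to it.
    fn insert(&mut self, key: K, value: V) -> u32 {
        let next = self.next_index;
        let index = match self.key_to_index.get(&key) {
            Some(&i) => i,
            None => {
                self.key_to_index.insert(key.clone(), next);
                self.next_index += 1;
                next
            }
        };
        self.index_to_entry.insert(index, (key, value));
        index
    }

    // Look up by dense index, recovering the stored key copy.
    fn get_by_index(&self, index: u32) -> Option<(&K, &V)> {
        self.index_to_entry.get(&index).map(|(k, v)| (k, v))
    }
}
```

The key duplication is the cost of keeping index-based lookups; as the comment says, the exact data structures here are expected to be iterated on.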
❌ Deploy Preview for salsa-rs failed.
🔨 Explore the source changes: d4a6b24
🔍 Inspect the deploy log: https://app.netlify.com/sites/salsa-rs/deploys/61f428116eb8450008d21f49
I finally got around to running our benchmarks. I ran our integration test benchmarks on salsa 0.16.0 (our current version in production), the master branch, and this branch. Here are the initial results:
In our case, I suspect part of the problem is that the majority of the preheating is routed through the same query (which looks up and executes dynamically registered entity finders by ID). Could this possibly be fixed by dynamically generating separate queries for each entity finder?
@vlthr interesting. I don't think you should have to refactor your tests -- it seems surprising that the problem is contention for the sync map. Do you have any way to measure the hit rate overall? Are your tests available on GitHub?
@nikomatsakis the code is unfortunately not publicly accessible, but if it helps I can DM more info and/or profiling/benchmark results. I wasn't able to measure the hit rate, but I had another look at the preheating code to figure out why it's causing so much contention. The way it was implemented was pretty naive: we have ~500 dynamically registered entity labellers, all accessible via a common query endpoint. I'm guessing that having 50+ threads (possibly more) hammering the same query might be close to the worst case for the sync map.
@vlthr ok, so let me see here. The idea is that you have a lot of threads all trying to execute the same query at once. Under this system, that probably does require an extra lock compared to the old system. You have to:
I believe the first lock is not used in the old system. It's hard to imagine that's such a source of contention, but then again, it might be! (It makes me want to experiment with alternatives to dashmap.) Still, it seems like we should consider landing this.
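A hedged sketch of the contention pattern under discussion, with a plain `Mutex<HashMap>` standing in for the real concurrent sync map (the function names are hypothetical): every thread must take the shared lock before it can learn whether the hot key is already computed, so one hot key serializes all of them.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical model of the contended step: before running a query, each
// thread locks one shared table to check whether a result for the key
// already exists. With dozens of threads and a single hot key, this one
// lock becomes the bottleneck.
fn compute_once(table: &Mutex<HashMap<&'static str, u64>>, key: &'static str) -> u64 {
    let mut guard = table.lock().unwrap(); // the contended lock
    if let Some(&v) = guard.get(key) {
        return v; // another thread already computed it
    }
    let v = 42; // stand-in for the expensive query computation
    guard.insert(key, v);
    v
}

// Many threads hammering the same key, as in the preheating scenario.
fn stress(threads: usize) -> Vec<u64> {
    let table = Arc::new(Mutex::new(HashMap::new()));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let table = Arc::clone(&table);
            thread::spawn(move || compute_once(&table, "hot-key"))
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

Note that sharded maps such as DashMap only spread contention across keys; when every thread wants the *same* key, they all land on the same shard, which is consistent with the worst-case scenario described above.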
bors r+
Build succeeded: |
This is the overhauled implementation that avoids slots, is more parallel-friendly, and paves the way to fixed-point and more expressive cycle handling.
We just spent 90 minutes going over it. Some rough notes are available here, and a video will be posted soon.
You may find the flowgraph useful.