Refactor (Hyper)ClockCache code #10887
Conversation
Summary: For clean-up and in preparation for some other anticipated changes, including

* A new dynamically-scaling variant of HyperClockCache
* SecondaryCache support for HyperClockCache

this change does some refactoring for current and future code sharing and reusability. (Including follow-up on facebook#10843)

* TBD whether the new variant will be a HyperClockCache or use some other name, so the namespace is just clock_cache for the family of structures.
* A number of helper functions are introduced and used.
* Pre-emptively split ClockHandle (shared among lock-free clock cache variants) and HandleImpl (specific to a kind of Table), and introduce a template to plug a new Table implementation into ClockCacheShard (see the sketch after this list).
* Mostly using helper functions. Some things like `Rollback()` and `FreeDataMarkEmpty()` were not combined, because `Rollback()` is Table-specific while `FreeDataMarkEmpty()` can be used with different table implementations.
* Performance testing indicated that despite more opportunities for parallelism, making a local copy of handle data for processing after marking an entry empty was slower than doing that processing before marking the entry empty (but after marking it "under construction"), thus avoiding copying a few words of data. At least for now, this answers the "TODO? Delay freeing?" questions (no).

Test Plan: fixed a unit testing gap; other minor test updates for refactoring. No functionality change.

Performance: same setup as facebook#10801:

Before: `readrandom [AVG 81 runs] : 627992 (± 5124) ops/sec`
After: `readrandom [AVG 81 runs] : 637512 (± 4866) ops/sec`

I've been getting some inconsistent results on restarts, like the system is not being fair to the two processes, so I'm not sure there's a real difference.
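To make the ClockHandle/HandleImpl split more concrete, here is a minimal C++ sketch of the shape described above. The type names `ClockHandle`, `HandleImpl`, and `ClockCacheShard` come from the description, but all fields and the `FixedTable` type are simplified placeholders, not the actual `clock_cache.h` definitions:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Hypothetical simplification: state shared by all lock-free clock cache
// variants lives in ClockHandle, while each table layout adds its own
// fields in a nested HandleImpl.
struct ClockHandle {
  std::atomic<uint64_t> meta{};  // ref counts + clock state, packed
  uint64_t hashed_key[2] = {};   // cache key hash
  size_t total_charge = 0;
};

struct FixedTable {
  // Table-specific handle: a fixed-size open-addressing table might need a
  // displacement counter for its probe sequences (illustrative only).
  struct HandleImpl : public ClockHandle {
    std::atomic<uint32_t> displacements{};
  };
  using HandleType = HandleImpl;
  // ... lookup/insert/evict operating on HandleImpl ...
};

// The shard is parameterized by the table, so a new dynamically-scaling
// table can be plugged in without touching the shard logic.
template <class Table>
class ClockCacheShard {
  Table table_;
  std::atomic<size_t> usage_{};
  std::atomic<size_t> occupancy_{};
  // ... Insert/Lookup/Release built on typename Table::HandleType ...
};

int main() {
  ClockCacheShard<FixedTable> shard;
  (void)shard;
  return 0;
}
```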
LGTM. A couple of questions inline.
```cpp
if (old_usage > capacity) {
  // Not too much to avoid thundering herd while avoiding strict
  // synchronization
  need_evict_charge += std::min(capacity / 1024, total_charge) + 1;
```
Can we try to use this strategy also for strict capacity limit? I guess it would be more complicated due to the hard requirement for a minimum amount to be evicted, but the perf benefit might be worth it?
With strict capacity limit, compare_exchange ensures only one thread attempts to evict the excess, and the case only arises with SetCapacityLimit anyway.
Although I haven't tested the difference, I presume here that we don't want to pay for compare_exchange. It's just not important to be that accurate.
Not sure if we're talking about the same thing. I meant: in the strict capacity limit case, would it be beneficial to evict more than necessary so that fewer threads need to do eviction, like you're doing here?
We only try to do extra if the cache exceeded capacity (`old_usage > capacity`). In `Evict` we encourage some locality and "bulk" in evictions with `step_size = 4`, increasing the chances of no eviction being required. That's the best place to do it because it puts more leverage on each atomic update to the clock pointer.
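To illustrate the "leverage on each atomic update" point, here is a hedged, self-contained sketch of bulk clock stepping. `kStepSize`, `EvictSome`, and `try_evict_slot` are illustrative names under assumed simplifications, not the actual RocksDB code:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdio>

// Sketch: advancing the clock pointer by a bulk step means one atomic RMW
// claims several consecutive slots, so each contended update on the shared
// clock pointer does more eviction work.
constexpr size_t kStepSize = 4;

std::atomic<size_t> clock_pointer{0};

void EvictSome(size_t table_size, bool (*try_evict_slot)(size_t)) {
  // One fetch_add claims kStepSize slots for this thread to scan.
  size_t start = clock_pointer.fetch_add(kStepSize, std::memory_order_relaxed);
  for (size_t i = 0; i < kStepSize; ++i) {
    size_t slot = (start + i) % table_size;
    try_evict_slot(slot);  // decrement clock counter / free if evictable
  }
}

int main() {
  auto probe = [](size_t slot) {
    std::printf("scanned slot %zu\n", slot);
    return false;  // stand-in: nothing actually evicted in this demo
  };
  EvictSome(16, probe);
  return 0;
}
```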
```diff
 #endif
   occupancy_.fetch_sub(1U, std::memory_order_release);
-  Rollback(hashed_key, h);
+  Rollback(h->hashed_key, h);
```
Why do this before marking the slot empty? Wouldn't it reduce some parallelism?
In theory yes, but as the NOTE on `FreeDataMarkEmpty()` says, performance testing showed that saving a copy of the data (`hashed_key` here) to enable that is more expensive than the potential benefit to parallelism.
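For context, a minimal sketch of the ordering being discussed, with stubbed-out types and bodies (`Handle`, `Rollback`, and `FreeDataMarkEmpty` here are simplified stand-ins, not the real implementations). Because the entry is already marked "under construction", no other thread can touch it, so its fields can be read in place; only afterwards is the slot marked empty:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

struct Handle {
  uint64_t hashed_key[2];
  std::atomic<uint64_t> meta;
};

std::atomic<size_t> occupancy{1};

void Rollback(const uint64_t* /*hashed_key*/, Handle* /*h*/) {
  // table-specific: undo bookkeeping along the probe path (stub)
}

void FreeDataMarkEmpty(Handle& h) {
  // free the entry's data, then publish the slot as empty (stub)
  h.meta.store(0, std::memory_order_release);
}

void RemoveEntry(Handle* h) {
  occupancy.fetch_sub(1U, std::memory_order_release);
  Rollback(h->hashed_key, h);  // reads handle fields in place (no copy)
  FreeDataMarkEmpty(*h);       // slot becomes reusable only after this
}

int main() {
  Handle h{};
  RemoveEntry(&h);
  return 0;
}
```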
@pdillinger has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
```cpp
  }
}
// Track new usage even if we weren't able to evict enough
usage_.fetch_add(total_charge - evicted_charge, std::memory_order_relaxed);
```
Apologies for the after-merge comment: it seems `evicted_charge` can be larger than `total_charge`, so could this add a wrong value to `usage_`? Thanks :)
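For what it's worth, a small standalone demonstration of the unsigned arithmetic in question (illustrative values, not RocksDB code): if `evicted_charge` exceeds `total_charge`, the difference wraps around, but adding the wrapped value to an unsigned counter is equivalent to subtracting modulo 2^64, so the net result is still correct as long as `usage_` itself is large enough not to underflow. Whether that precondition always holds here is exactly the question above.

```cpp
#include <atomic>
#include <cstdio>

int main() {
  std::atomic<size_t> usage{100};
  size_t total_charge = 5;
  size_t evicted_charge = 8;  // evicted more than the new entry's charge
  // 5 - 8 wraps to SIZE_MAX - 2; adding it to 100 yields 97 (mod 2^64),
  // i.e. the same as 100 + 5 - 8.
  usage.fetch_add(total_charge - evicted_charge, std::memory_order_relaxed);
  std::printf("usage = %zu\n", usage.load());  // prints: usage = 97
  return 0;
}
```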