
Call experimental new clock cache HyperClockCache #10684

Closed

Conversation

@pdillinger (Contributor) commented Sep 15, 2022

Summary: This change establishes a distinctive name for the experimental new lock-free clock cache (originally developed by @guidotag and revamped in PR #10626). A few reasons:

  • We want to make it clear that this is a fundamentally different implementation from the old clock cache, to avoid people saying "I already tried clock cache."
  • We want to highlight the key feature: it's fast, especially under parallel load.
  • Because it requires an estimated charge per entry, it is not drop-in API compatible with the old clock cache. This estimate might always be required for highest performance, and giving the cache a distinct name should reduce confusion about the distinct API requirements.
  • We might develop a variant requiring the same estimate parameter but with LRU eviction. In that case, the name HyperLRUCache should make things clearer. (FastLRUCache is just a prototype that might soon be removed.)
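For readers unfamiliar with clock-style eviction, here is a minimal, illustrative sketch of the classic CLOCK ("second chance") scheme. This is emphatically not RocksDB's lock-free implementation — the real cache is sharded, supports variable per-entry charges, and is concurrency-safe — and the names here are invented for illustration. The atomic counters hint at why CLOCK parallelizes well (a lookup only bumps a counter, no list manipulation under a mutex as in LRU), but this sketch itself is single-threaded.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>

// Illustrative slot in a fixed-size clock ring (not RocksDB's layout).
struct Slot {
  std::atomic<uint32_t> ref{0};  // usage counter bumped on access
  bool occupied = false;
  int key = -1;
};

// Toy CLOCK cache over integer keys. Lookups are linear scans for
// simplicity; a real implementation would use a hash table.
template <std::size_t N>
class ClockRing {
 public:
  // Record an access: bump the entry's reference counter. Note that a
  // "hit" only touches one atomic, unlike LRU's list re-linking.
  void Touch(int key) {
    for (auto& s : slots_) {
      if (s.occupied && s.key == key) {
        s.ref.fetch_add(1, std::memory_order_relaxed);
        return;
      }
    }
  }

  // Insert a key, evicting via the clock hand if needed.
  void Insert(int key) {
    while (true) {
      Slot& s = slots_[hand_];
      hand_ = (hand_ + 1) % N;
      if (!s.occupied || s.ref.load(std::memory_order_relaxed) == 0) {
        s.occupied = true;
        s.key = key;
        s.ref.store(1, std::memory_order_relaxed);
        return;
      }
      // "Second chance": decay the counter and advance the hand.
      s.ref.fetch_sub(1, std::memory_order_relaxed);
    }
  }

  bool Contains(int key) const {
    for (const auto& s : slots_) {
      if (s.occupied && s.key == key) return true;
    }
    return false;
  }

 private:
  std::array<Slot, N> slots_{};
  std::size_t hand_ = 0;
};
```

With a two-slot ring, inserting keys 1 and 2, touching 1, then inserting 3 evicts key 2: the clock hand decays both counters, and key 2's counter reaches zero first.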

Some API detail:

  • To reduce copy-pasting of parameter lists, etc. as in `LRUCache` construction, I have a `MakeSharedCache()` function on `HyperClockCacheOptions` instead of a `NewHyperClockCache()` free function.
  • Changes `-cache_type=clock_cache` to `-cache_type=hyper_clock_cache` for applicable tools. I think this is more consistent and sustainable for the reasons already stated.
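The `MakeSharedCache()` point can be illustrated with a simplified sketch of the options-struct factory pattern. The `Cache` class body and the exact field set below are stand-ins for illustration, not the real RocksDB API surface:

```cpp
#include <cstddef>
#include <memory>

// Stand-in for rocksdb::Cache; the real class is far richer.
class Cache {
 public:
  explicit Cache(std::size_t capacity) : capacity_(capacity) {}
  std::size_t capacity() const { return capacity_; }

 private:
  std::size_t capacity_;
};

// Sketch of the options-struct-with-factory pattern the PR describes.
struct HyperClockCacheOptions {
  std::size_t capacity = 0;
  // Unlike LRUCache, an estimated charge (size) per entry is required.
  std::size_t estimated_entry_charge = 0;
  int num_shard_bits = -1;

  // Factory on the options struct: callers set fields by name rather
  // than copy-pasting a positional parameter list at each call site.
  std::shared_ptr<Cache> MakeSharedCache() const {
    return std::make_shared<Cache>(capacity);
  }
};
```

A caller then reads naturally: `HyperClockCacheOptions opts; opts.capacity = 64 << 20; opts.estimated_entry_charge = 32 * 1024; auto cache = opts.MakeSharedCache();` — adding a new option later does not break existing call sites the way adding a positional parameter to a `New...()` function would.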

For performance tests, see #10626

Test Plan: no interesting functional changes; tests updated

I'm not yet announcing this in HISTORY.md because it is considered experimental and I haven't put all the advantages and caveats in the public API yet.
@facebook-github-bot

@pdillinger has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@ltamasi (Contributor) commented Sep 15, 2022

  • We want to highlight the key feature: it's fast (especially under parallel load)

How about OverclockCache then? :)

@anand1976 (Contributor) left a comment:

LGTM

@facebook-github-bot

@pdillinger has updated the pull request. You must reimport the pull request before landing.

@facebook-github-bot

@pdillinger has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

4 participants