Compare caching implementation with Caffeine #104
A quick benchmark shows the new cache at about 2x the throughput of a synchronized LinkedHashMap in LRU mode. Its crawling thread skews the benchmarks a bit, as there doesn't appear to be a clean way to halt it afterwards, which hurts the other caches by stealing their CPU time. So I can't check this in as-is; it is too disruptive to my tooling.
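For context, the "synchronized LinkedHashMap in LRU mode" baseline is typically built like this (a minimal sketch; the class name and capacity are illustrative, not the benchmark's actual code):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// A bounded LRU cache: LinkedHashMap in access-order mode evicts the
// least-recently-used entry once capacity is exceeded. Wrapping it with
// Collections.synchronizedMap gives the coarse-grained locking that the
// concurrent designs discussed here are compared against.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(capacity + 1, 0.75f, /* accessOrder= */ true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    static <K, V> Map<K, V> createSynchronized(int capacity) {
        return Collections.synchronizedMap(new LruCache<>(capacity));
    }
}
```

Every read and write takes the single map lock, which is exactly why throughput lags behind lock-amortizing designs under contention.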
The efficiency results show the cache uses a FIFO-like policy. It's slightly worse than FIFO due to using buckets (aka segments), which may cause premature evictions. The
I apologize for the late reply. The benchmark results for Caffeine look impressive, congrats @ben-manes! It would be nice to see the same benchmarks executed with 256 / 512 / 1024 threads, just to check for performance degradation. I am going to provide an easy way to stop the crawling thread, and I will write again when it is ready. It would be great to see the Rapidoid cache officially in the benchmarks. :) Regarding the integration of Caffeine into Rapidoid:
Perhaps then you might prefer embedding ConcurrentLinkedHashMap instead. That was a popular library, parts of which we ported into Guava to add caching functionality. It's small and easy to embed, e.g. Groovy does so. Most users, e.g. Cassandra, have upgraded to Caffeine. CLHM doesn't use any threads; it amortizes maintenance onto the calling threads. That would give you most of the performance, but it is strictly a concurrent LRU map: you would need to add memoization and expiration as decorators. The library's code is fairly straightforward, so you could prune away the general map methods. If you were up to it, you could add expiration directly into the map if you use a time-bounded FIFO approach. All that might sound complex until you understand the design; see the slides. It uses a write-ahead log to record and replay events. This avoids contention as the number of threads or cores increases, since it appends to a buffer rather than locking. If you go down that path, generalizing the API to plug in Caffeine might not be necessary. You'd have a small, built-in, fast cache.
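The "expiration as a decorator" idea above can be sketched over any ConcurrentMap by storing a write timestamp alongside each value. This is an illustrative assumption of the decorator shape, not CLHM's or Rapidoid's actual code; a real time-bounded FIFO would also evict eagerly from the write order, whereas this sketch only expires lazily on read:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Expire-after-write as a decorator: each value carries its write time,
// and reads drop entries older than the TTL. The remove(key, entry) form
// only removes if the entry was not concurrently replaced.
class ExpiringCache<K, V> {
    private static final class Timestamped<V> {
        final V value;
        final long writeNanos;
        Timestamped(V value, long writeNanos) {
            this.value = value;
            this.writeNanos = writeNanos;
        }
    }

    private final ConcurrentMap<K, Timestamped<V>> map = new ConcurrentHashMap<>();
    private final long ttlNanos;

    ExpiringCache(long ttlNanos) {
        this.ttlNanos = ttlNanos;
    }

    void put(K key, V value) {
        map.put(key, new Timestamped<>(value, System.nanoTime()));
    }

    V get(K key) {
        Timestamped<V> entry = map.get(key);
        if (entry == null) {
            return null;
        }
        if (System.nanoTime() - entry.writeNanos > ttlNanos) {
            map.remove(key, entry); // expired; remove only if unchanged
            return null;
        }
        return entry.value;
    }
}
```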
Thanks for sharing the resources/slides; the design/architecture looks smart and interesting. Since Rapidoid's built-in cache is already good enough as a default, I wouldn't switch to CLHM, because it would require significant effort to reimplement the same features on top of CLHM. Last week I made some improvements and implemented proper resource clean-up of Rapidoid's cache. Starting from v5.3.3, simply calling Finally, I am still optimistic about including Caffeine in the Rapidoid platform in the future (the framework targets JDK 7+ for now, but the platform is on JDK 8).
I'll try to get to it in the next day or two. If you'd like to take a stab at it, see this commit adding ExpiringMap; the only difference would be overriding the default methods to call shutdown.
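The shutdown hook being discussed, i.e. a clean way to stop a cache's background "crawling" thread so benchmarks can halt it, can be sketched like this. All names here are illustrative assumptions, not the actual Rapidoid or benchmark code:

```java
// A cache that owns a background maintenance ("crawler") thread and
// exposes close() so embedders and benchmark harnesses can stop it.
class CrawlingCache implements AutoCloseable {
    private volatile boolean running = true;
    private final Thread crawler;

    CrawlingCache() {
        crawler = new Thread(() -> {
            while (running) {
                // ... periodic eviction / maintenance work would go here ...
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return; // exit promptly when interrupted by close()
                }
            }
        }, "cache-crawler");
        crawler.setDaemon(true);
        crawler.start();
    }

    boolean isCrawlerAlive() {
        return crawler.isAlive();
    }

    @Override
    public void close() throws InterruptedException {
        running = false;      // signal the loop to stop
        crawler.interrupt();  // wake it if sleeping
        crawler.join();       // wait for termination
    }
}
```

With this in place, a benchmark can construct the cache, run its measurement, and call `close()` so the crawler no longer steals CPU time from the other caches being measured.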
Sure, I would give it a try later this week... |
It might be simpler to drop in https://github.com/ben-manes/caffeine than to write your own cache implementation.
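Dropping in Caffeine typically looks like the following (this assumes the `com.github.ben-manes.caffeine:caffeine` dependency on the classpath; the size and TTL values are illustrative):

```java
import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

// A size-bounded cache with expire-after-write, built in a few lines.
class CaffeineExample {
    static Cache<String, Integer> newCache() {
        return Caffeine.newBuilder()
                .maximumSize(10_000)
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .build();
    }
}
```

Caffeine performs its maintenance on calling threads (optionally handing cleanup to an executor), so there is no crawler thread to shut down.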