
Entry stats #159

Closed
mingard opened this issue Sep 9, 2019 · 9 comments · Fixed by #184

@mingard
Contributor

mingard commented Sep 9, 2019

Being able to see the number of times a cached resource was requested is a useful metric, especially at the point of removal.

As a production example, OnRemove could be used to trigger cache pre-warming based on the frequency of cache hits on a resource within a given window.

I'm aware of the overhead of storing such data by default and suggest this only as an optional parameter.

Thoughts?
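The pre-warming idea above can be sketched as follows. This is a minimal illustration, not bigcache's actual API: `EntryStats`, `onRemove`, and `prewarmThreshold` are all hypothetical names, assuming an eviction callback that receives a per-entry request count.

```go
package main

import "fmt"

// EntryStats is a hypothetical per-entry stats record, as proposed
// in this issue: a request count attached to each cache entry.
type EntryStats struct {
	RequestCount uint64
}

// prewarmThreshold: entries requested at least this often during
// their lifetime get re-fetched on eviction (assumed policy).
const prewarmThreshold = 100

// onRemove sketches the proposed callback: if the evicted entry
// was hot, hand its key to a pre-warming function.
func onRemove(key string, stats EntryStats, prewarm func(string)) {
	if stats.RequestCount >= prewarmThreshold {
		prewarm(key)
	}
}

func main() {
	var warmed []string
	collect := func(k string) { warmed = append(warmed, k) }

	onRemove("user:42", EntryStats{RequestCount: 150}, collect) // hot entry
	onRemove("user:7", EntryStats{RequestCount: 3}, collect)    // cold entry

	fmt.Println(warmed) // only the hot entry is scheduled for re-warming
}
```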

@siennathesane
Collaborator

That already exists, see here. Or is this something different?

@mingard
Contributor Author

mingard commented Sep 11, 2019

@mxplusb those are global stats, not stats specific to an Entry. I was thinking EntryInfo could be extended to hold some per-entry stats and be exposed somehow. Currently the only way to see EntryInfo is to use EntryInfoIterator, which could be heavy when used in a hot path, and which doesn't yet store stats anyway.

@cristaloleg
Collaborator

Currently we don't have this feature, but the idea sounds reasonable. Storing per-item stats might be useful.

I'm afraid of a few obvious things:

  1. memory consumption
  2. additional work (in terms of CPU and latency)

To solve the 1st, we might ask for a maximum memory budget for stats (treating the stats as best-effort within that cap).
Regarding the 2nd, it's an open question. I know for sure that such stats should be optionally enabled.

Any suggestions or design ideas @mingard ?

@mingard
Contributor Author

mingard commented Sep 11, 2019

@cristaloleg I think we should definitely make this optional, defaulting to disabled.

Regarding the memory issue, I suppose it's worth testing, but as long as the stats are minimal (perhaps just a request count per entry) and are properly disposed of when the entry is removed, it shouldn't be too heavy.

Regarding CPU, since a number will need to be incremented each time an Entry is accessed, using an atomic counter would perhaps lower the resources required.

@siennathesane
Collaborator

memory consumption

I don't think the memory consumption will be terribly high: we could limit it to uint64 types and just roll over if we hit the limit. That's 8 bytes per entry, so m = 8 * numEntries of guaranteed memory usage. Assuming my math is correct, it would take ~134 million cache entries to consume 1GB of memory on entry stats, which seems like a fair tradeoff. There will be a small bit of overhead for stack allocation to increment the stats, but it will be reclaimed when the stack frame returns.
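The arithmetic behind that estimate can be checked directly: with one 8-byte uint64 counter per entry, the number of entries whose counters fit in 1 GiB is 2^30 / 8.

```go
package main

import "fmt"

func main() {
	const bytesPerCounter = 8 // one uint64 counter per cache entry
	const oneGiB = 1 << 30    // 1 GiB in bytes

	// Entries whose counters fit in a 1 GiB stats budget:
	fmt.Println(oneGiB / bytesPerCounter) // 134217728 (~134 million)
}
```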

additional work (in terms of CPU and latency)

I think this one should be tested; as long as it's not part of the core code path, it should be fine. If we ship the stats to a channel after the cache has been read or written, the impact on latency shouldn't be noticeable. We could always provide a set of recommendations, like using a multi-core system for best performance.
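Shipping stats to a channel after each access can be sketched like this. All names (`statEvent`, the aggregation map) are illustrative assumptions, not bigcache's implementation: the hot path only performs a buffered channel send, and a background goroutine does the counting.

```go
package main

import "fmt"

// statEvent is a hypothetical record emitted after each cache
// access and consumed off the hot path.
type statEvent struct {
	key string
}

func main() {
	// Buffered so the hot path rarely blocks on the send.
	events := make(chan statEvent, 1024)
	done := make(chan map[string]int)

	// Background consumer aggregates per-key hit counts.
	go func() {
		counts := make(map[string]int)
		for ev := range events {
			counts[ev.key]++
		}
		done <- counts
	}()

	// Hot path: after a read, ship the event and move on.
	for i := 0; i < 3; i++ {
		events <- statEvent{key: "user:42"}
	}
	events <- statEvent{key: "user:7"}
	close(events)

	counts := <-done
	fmt.Println(counts["user:42"], counts["user:7"]) // 3 1
}
```

The tradeoff is that counts become eventually consistent: a read's stat may land a moment after the read itself, which is usually acceptable for hit-frequency metrics.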

Either way, I'm happy to review a PR and see the initial implementation. :)

@mingard
Contributor Author

mingard commented Sep 13, 2019

@mxplusb I'll do my best!

cristaloleg pushed a commit that referenced this issue Nov 5, 2019
@mingard
Contributor Author

mingard commented Nov 30, 2019

@cristaloleg is it possible to release this?

@cristaloleg
Collaborator

@mingard Done! See v2.1.4.

@mingard
Contributor Author

mingard commented Dec 2, 2019

@cristaloleg thank you so much!

flisky pushed a commit to flisky/bigcache that referenced this issue May 7, 2020