
perf: improve perf by caching unchanged histogram strings between gets #555

Closed

Conversation

Contributor

@shappir commented Mar 12, 2023

Histogram.get() returns the content of the histogram as JSON, which Registry.getMetricsAsString() then converts into a string representation. This conversion can be expensive, especially when there are many labels / label values / buckets, because each combination needs to be converted. The overhead can be reduced by caching the string representation of each entry in the histogram hash: when there are many combinations, most are likely to remain unchanged between successive gets of the metrics.
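
A minimal sketch of that idea, with made-up names rather than prom-client's actual internals: each entry keeps the string it was last rendered to, and any observation invalidates that cache, so unchanged entries can be emitted without re-formatting.

```js
// Sketch only: class and field names are illustrative, not the real internals.
class HistogramEntry {
  constructor(labels, buckets) {
    this.labels = labels;
    this.buckets = buckets;                               // upper bounds, ascending
    this.bucketCounts = new Map(buckets.map((b) => [b, 0]));
    this.sum = 0;
    this.count = 0;
    this.cachedString = null;                             // rendered lines for this entry
  }

  observe(value) {
    for (const bound of this.buckets) {
      if (value <= bound) this.bucketCounts.set(bound, this.bucketCounts.get(bound) + 1);
    }
    this.sum += value;
    this.count += 1;
    this.cachedString = null;                             // any change invalidates the cache
  }

  render(name, formatEntry) {
    // Re-format only if the entry changed since the last scrape.
    if (this.cachedString === null) {
      this.cachedString = formatEntry(name, this);
    }
    return this.cachedString;
  }
}
```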

To perform the caching, the conversion to string needs to happen inside Histogram.get(). To accomplish this, Registry.getMetricsAsString() now passes an optional value-to-string conversion function into it. If such a function is provided, the caching happens; otherwise the behavior is the same as before.
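
The shape of that change, as a hedged sketch (parameter and helper names here are assumptions, not the exact code in this PR): the registry hands its formatter down to the metric, and get() uses it to fill and reuse the per-entry cache.

```js
// Sketch of the optional value-to-string parameter; all names are illustrative.
class MetricSketch {
  constructor(name, entries) {
    this.name = name;
    this.entries = entries; // [{ labels, value, cachedString }]
  }

  get(valueToString) {
    const values = this.entries.map((entry) => {
      if (!valueToString) return entry;                      // previous behavior: raw entries
      if (entry.cachedString == null) {
        entry.cachedString = valueToString(this.name, entry); // render once
      }
      return entry.cachedString;                             // reuse until the entry changes
    });
    return { name: this.name, type: 'histogram', values };
  }
}

// Registry side: it still owns the output format, but hands it down so the
// string work (and the cache) lives next to the data.
function metricsAsStringSketch(metrics, valueToString) {
  return metrics.map((m) => m.get(valueToString).values.join('\n')).join('\n');
}

// Toy usage with a hypothetical formatter:
const toLine = (name, e) =>
  `${name}{${Object.entries(e.labels).map(([k, v]) => `${k}="${v}"`).join(',')}} ${e.value}`;
const metric = new MetricSketch('bench_histogram_bucket', [
  { labels: { le: '0.5', route: '/a' }, value: 42 },
]);
console.log(metricsAsStringSketch([metric], toLine));
```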

It's possible to easily extend this behavior to the other metric types. However, they won't see as significant an improvement, since the per-value conversion is less expensive for them.

Note: the registry benchmark has been modified to update an eighth of a histogram between fetches of the metrics string, so that not everything is served from the cache. With full caching the operation is so fast that the benchmark explodes.
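
A rough sketch of that benchmark tweak (the setup, counts, and loop shape here are my assumptions, not the actual benchmark file): between calls to registry.metrics(), re-observe one eighth of the label combinations so only part of the cache stays valid.

```js
// Illustrative benchmark shape, not the actual benchmark in this repository.
const client = require('prom-client');

const registry = new client.Registry();
const histogram = new client.Histogram({
  name: 'bench_histogram',
  help: 'benchmark histogram',
  labelNames: ['route'],
  registers: [registry],
});

// Pre-populate many label combinations so the metrics string is large.
const routes = Array.from({ length: 64 }, (_, i) => `/route/${i}`);
routes.forEach((route) => histogram.observe({ route }, Math.random()));

async function benchIteration(iteration) {
  // Touch one eighth of the combinations so their cached strings would be
  // invalidated while the rest could be served from the cache.
  const eighth = routes.length / 8;
  const start = (iteration * eighth) % routes.length;
  for (const route of routes.slice(start, start + eighth)) {
    histogram.observe({ route }, Math.random());
  }
  return registry.metrics(); // the operation being measured
}
```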

Collaborator

@SimenB commented Mar 12, 2023

This conflicts with #549 - but the optimisations here might've been applied there?

Contributor Author

@shappir commented Mar 12, 2023

Which would you prefer: that I apply my optimizations on top of that PR, or that I wait until it's merged and then have this PR reviewed/merged separately?

Also, can you review and provide feedback on this in any event, even if you don't want to merge yet?

Collaborator

@SimenB commented Mar 12, 2023

#549 has landed on master, which is why this PR has conflicts. If the optimisations in this PR are still applicable, I'd love to land them 🙂

The changes in this PR look good in isolation 👍

Contributor Author

@shappir commented Mar 14, 2023

Checking to determine whether this is still applicable. In the interim, I'm closing this PR.

@shappir closed this Mar 14, 2023