
Feature: Return a histogram for response time #43

Open
remeika opened this issue Nov 9, 2017 · 6 comments

@remeika

remeika commented Nov 9, 2017

This exporter should have the option to expose response times as a histogram, rather than a simple average. The dataset to support this feature is already available in the upstreamZones.requestMsecs.msecs data structure from VTS.

Proposed Interface: An optional command line flag -nginx.response.histogram_buckets is added, taking a comma-separated list of integers as an argument. When set, two additional metrics are exposed: {NAMESPACE}_filter_responses and {NAMESPACE}_upstream_responses. Both are histograms whose bucket boundaries come from the flag.
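
A rough sketch of how this could look with the Go Prometheus client (the flag name matches the proposal above, but the bucket parsing, metric names, and label sets are only illustrative, not final exporter code):

```go
package main

import (
	"flag"
	"log"
	"strconv"
	"strings"

	"github.com/prometheus/client_golang/prometheus"
)

// Proposed flag: a comma-separated list of bucket boundaries in milliseconds.
var histogramBuckets = flag.String("nginx.response.histogram_buckets", "",
	"Comma-separated list of response time histogram buckets in milliseconds")

// parseBuckets turns "5,10,50,100" into []float64{5, 10, 50, 100}.
func parseBuckets(s string) ([]float64, error) {
	var buckets []float64
	for _, part := range strings.Split(s, ",") {
		v, err := strconv.ParseFloat(strings.TrimSpace(part), 64)
		if err != nil {
			return nil, err
		}
		buckets = append(buckets, v)
	}
	return buckets, nil
}

// newResponseHistograms registers the two proposed metrics. The label sets
// are placeholders; the real exporter would use its existing label scheme.
func newResponseHistograms(namespace string, buckets []float64) (*prometheus.HistogramVec, *prometheus.HistogramVec) {
	filterResponses := prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: namespace,
		Name:      "filter_responses",
		Help:      "Response time distribution per filter",
		Buckets:   buckets,
	}, []string{"filter", "filterName"})

	upstreamResponses := prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: namespace,
		Name:      "upstream_responses",
		Help:      "Response time distribution per upstream",
		Buckets:   buckets,
	}, []string{"upstream"})

	prometheus.MustRegister(filterResponses, upstreamResponses)
	return filterResponses, upstreamResponses
}

func main() {
	flag.Parse()

	// In the real exporter these metrics would only be registered when the
	// flag is set; for this sketch we simply require it.
	buckets, err := parseBuckets(*histogramBuckets)
	if err != nil {
		log.Fatalf("invalid -nginx.response.histogram_buckets: %v", err)
	}
	_, upstreamResponses := newResponseHistograms("nginx", buckets)

	// During a scrape, each observed response time would be fed in like this:
	upstreamResponses.WithLabelValues("backend").Observe(37)
}
```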

Note: I am actively building this feature; if you have any comments on the interface, please let me know!

@sysulq
Owner

sysulq commented Nov 10, 2017

Nice job! 😃

@jorgv

jorgv commented Mar 8, 2018

Hi @remeika, do you have an update on this feature? It would be nice to test it.

@discordianfish

I just realized that the average can be very misleading: when you look at the individual time series, things look okay-ish; they might just gloss over some (e.g. very slow) requests.
But once you start aggregating multiple response time averages, you end up with completely useless metrics, since different upstreams have different sample frequencies.
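
To make that concrete, here is a tiny made-up example (numbers invented, Go just for illustration):

```go
package main

import "fmt"

func main() {
	// Two upstreams with very different request volumes (made-up numbers).
	avgMsec := []float64{10, 1000}  // per-upstream average response time in ms
	requests := []float64{9990, 10} // per-upstream request count

	// Naive aggregation: the average of averages ignores sample frequency.
	naive := (avgMsec[0] + avgMsec[1]) / 2

	// Request-weighted aggregation, i.e. the true overall average.
	weighted := (avgMsec[0]*requests[0] + avgMsec[1]*requests[1]) /
		(requests[0] + requests[1])

	fmt.Printf("average of averages: %.1f ms, request-weighted: %.1f ms\n",
		naive, weighted)
	// Prints 505.0 ms vs 11.0 ms: the naive number is dominated by the
	// low-traffic upstream and says little about what clients experience.
}
```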

Moreover, I don't think the proposed design works either, since the VTS data structure is also just sampled: you again miss some requests and can't compare two upstreams.

I think the only way to properly solve this is histogram buckets in nginx itself, i.e. each request needs to be counted into one bucket.

@remeika
Author

remeika commented Apr 20, 2018

Ping @BertHartm

@steffenmueller4

steffenmueller4 commented May 28, 2018

Is there any development in place here yet? I'm also very interested in this feature :-)

@discordianfish

I don't think this is possible with the current state of the VTS module; it would have to be implemented there.
