
Request body read time metric #1569

Merged: 1 commit, Feb 20, 2019
Conversation

rianmcguire (Contributor)

Measure the time spent reading the HTTP request body and expose it to the Rack app as env['puma.request_body_wait'].

This can be combined with a timestamp from an upstream proxy to get an indication of how long a request was waiting for a Puma thread to become available.

Fixes #1541
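To illustrate how the new metric could be consumed, here is a minimal Rack middleware sketch. It assumes the upstream proxy stamps each request with an `X-Request-Start: t=<epoch milliseconds>` header, and that `env['puma.request_body_wait']` is reported in milliseconds as in this PR; the middleware class, logger, and header format are illustrative, not part of the change itself:

```ruby
# Hypothetical middleware combining an upstream proxy timestamp with
# env['puma.request_body_wait'] to estimate time spent waiting for a
# Puma thread. Assumes "X-Request-Start: t=<epoch milliseconds>".
class QueueTimeLogger
  def initialize(app, logger: ->(ms) { puts "queue_time_ms=#{ms.round(1)}" })
    @app = app
    @logger = logger
  end

  def call(env)
    if (start = env['HTTP_X_REQUEST_START'])
      proxy_ms     = start.delete_prefix('t=').to_f
      now_ms       = Time.now.to_f * 1000
      body_wait_ms = env['puma.request_body_wait'].to_f # milliseconds, per this PR
      # Total time since the proxy saw the request, minus the time Puma
      # spent reading the body in its reactor, approximates thread wait.
      @logger.call(now_ms - proxy_ms - body_wait_ms)
    end
    @app.call(env)
  end
end
```

The subtraction is the point of the PR: without `puma.request_body_wait`, slow request-body uploads would be indistinguishable from genuine thread-pool queueing.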

@rianmcguire (Contributor, Author)

Looks like I have a test failure on JRuby. I'll investigate that in the next couple of days.

@nateberkopec (Member)

This feature should be optional.

@schneems do you have any thoughts about how we should organize stuff like this in general? Add to Puma.stats?

@schneems (Contributor) commented May 9, 2018

Puma.stats is mostly for high level server metrics. Also everything in there is pretty cheap to compute.

This would be a per request metric. The only good way to expose that would be either through the env hash like this or some kind of more architected notification API where people could subscribe to certain events.
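For contrast, a hedged sketch of the kind of data `Puma.stats` exposes: cheap, point-in-time, server-wide gauges (the JSON below is illustrative of the era's output, not an exact capture):

```ruby
require 'json'

# Puma.stats-style output: server-wide gauges that are cheap to compute.
stats_json = '{"backlog":2,"running":5,"pool_capacity":11,"max_threads":16}'
stats = JSON.parse(stats_json)

# These describe the server as a whole at one instant. A per-request
# figure like body read time has no natural slot here, which is why
# this PR exposes it through the Rack env instead.
puts "backlog=#{stats['backlog']} pool_capacity=#{stats['pool_capacity']}"
```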

Check out #1577 and the attached PR. The goal is to give a high level indication that the app needs to scale out with more servers. Is that kinda what you’re trying to achieve with these measurements?

@rianmcguire (Contributor, Author)

@schneems Yes, that's exactly what I'm after.

I'm using the same approach as SamSaffron described with the upstream timestamp header. haproxy can't buffer the entire request body like nginx, so I wanted to account for the time spent in puma's reactor waiting for the body.
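The upstream timestamp this approach relies on would be stamped at the proxy. Illustrative config fragments (header name and exact directive syntax depend on your proxy and version, so treat these as sketches to verify against your proxy's documentation):

```
# haproxy: stamp requests with epoch seconds + milliseconds
http-request set-header X-Request-Start t=%Ts%ms

# nginx equivalent, for comparison (msec is seconds.milliseconds):
# proxy_set_header X-Request-Start "t=${msec}";
```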

If there's a broader plan to make the Puma.stats backlog metric accurate, I'd prefer that to this very specific (and complicated) approach of calculating the wait time.

I like your "negative backpressure" idea. The disadvantage versus a backlog or wait time metric is it doesn't let you quantify how far over capacity you are, but I'm not sure how much that matters in practice.

@evanphx evanphx merged commit 3be7a8e into puma:master Feb 20, 2019