Add reporting of max_threads to status #12
Conversation
lib/puma/plugin/systemd.rb
def to_s
  if clustered?
-     "puma #{Puma::Const::VERSION} cluster: #{booted_workers}/#{workers} workers: #{running} threads, #{backlog} backlog"
+     "puma #{Puma::Const::VERSION} cluster: #{booted_workers}/#{workers} workers: #{running} threads, #{backlog} backlog, #{max_threads} max_threads"
Interesting! Without looking at puma's code, what is max_threads? Is that the configured maximum? Should this be:

-     "puma #{Puma::Const::VERSION} cluster: #{booted_workers}/#{workers} workers: #{running} threads, #{backlog} backlog, #{max_threads} max_threads"
+     "puma #{Puma::Const::VERSION} cluster: #{booted_workers}/#{workers} workers: #{running}/#{max_threads} threads, #{backlog} backlog"
Does it also apply to the unclustered mode?
Good question. The max_threads stat represents the maximum number of threads a worker can scale up to use (https://github.com/puma/puma/blob/master/lib/puma/dsl.rb#L335). The implication is that it alone doesn't represent the total possible. If we output it like you have it (which I'm open to), we'd simply need to have it be #{running}/#{max_threads * workers} threads.
Yes, max_threads is applicable in non-clustered mode:
{"started_at"=>"2020-08-28T12:20:27Z", "backlog"=>0, "running"=>0, "pool_capacity"=>16, "max_threads"=>16}
What may be additionally interesting here is the overall pool capacity, as that gives a view of how many requests can be processed at that point in time.
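For illustration, the single-mode stats hash above could be turned into a status line roughly like so. This is only a sketch: the hash is hard-coded to mirror the example output, whereas a real plugin would read it from Puma.stats.

```ruby
# Sketch: format a status line from a single-mode Puma stats hash.
# The hash shape mirrors the example above; values are taken from it,
# not fetched from a live Puma server.
stats = {
  "started_at"    => "2020-08-28T12:20:27Z",
  "backlog"       => 0,
  "running"       => 0,
  "pool_capacity" => 16,
  "max_threads"   => 16
}

# "running" threads out of the configured maximum, plus the request backlog.
status = "puma: #{stats['running']}/#{stats['max_threads']} threads, " \
         "#{stats['backlog']} backlog"
puts status  # => "puma: 0/16 threads, 0 backlog"
```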
Sorry, what is pool capacity? Is that the number of threads currently alive, i.e. when min_threads != max_threads?
I feel like this might be why I avoided putting these in here, as the status is a brief overview, not a comprehensive monitoring solution. 😅
Pool capacity is the current number of threads able to process requests.
Right. That makes "X/Z threads" and "Y/Z pool capacity" feel a bit at odds. There are Y threads, X of them are busy, and we might create up to Z threads total. The pool capacity is irrelevant for folks running min_threads == max_threads; Y will always equal Z.
Perhaps X/Z threads (Y alive), the latter part only being included iff min_threads < max_threads?
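The X/Y/Z relationship being discussed can be sketched with some invented numbers. The arithmetic below assumes Puma's ThreadPool definition, under which pool_capacity = waiting threads + (max_threads - spawned threads), so in-flight requests work out to max_threads - pool_capacity; treat that as an assumption, not the plugin's actual code.

```ruby
# Sketch of the X/Y/Z relationship discussed above (numbers invented).
# Z = max_threads (configured maximum)
# Y = threads currently alive ("running" in Puma's stats)
# pool_capacity = threads that could take a request right now
max_threads   = 16                           # Z
running       = 12                           # Y, threads currently spawned
pool_capacity = 8                            # 4 waiting + (16 - 12) not yet spawned

# Assumed relation: in-flight requests X = max_threads - pool_capacity.
busy = max_threads - pool_capacity           # X = 8 requests in progress

status = "#{busy}/#{max_threads} threads (#{running} alive)"
puts status  # => "8/16 threads (12 alive)"
```

When min_threads == max_threads, running always equals max_threads, which is why the "(Y alive)" suffix only adds information when the pool can actually shrink.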
Sorry, I read myself into knots on this one, and got pool capacity totally wrong multiple times despite your words.
But after reading the code, I think this makes sense — if we were running 2 workers, 8..16 threads, and had 8 requests in progress:
Status: "puma 4.3.3 cluster: 2/2 workers: 16/32 threads, 8 available, 0 backlog"
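The clustered status line above could be assembled by summing per-worker stats. A sketch under assumptions: the hash below imitates the rough shape of cluster-mode Puma.stats (a "worker_status" array whose entries carry a "last_status" hash), but every value is invented to reproduce the example, and the version string is a placeholder for Puma::Const::VERSION.

```ruby
# Sketch: build the clustered status line from a cluster-mode stats hash.
# Hash shape loosely follows Puma.stats in cluster mode; values invented
# to match the example (2 workers, 16/32 threads, 8 available, 0 backlog).
stats = {
  "workers"        => 2,
  "booted_workers" => 2,
  "worker_status"  => [
    { "last_status" => { "running" => 8, "max_threads" => 16,
                         "pool_capacity" => 4, "backlog" => 0 } },
    { "last_status" => { "running" => 8, "max_threads" => 16,
                         "pool_capacity" => 4, "backlog" => 0 } }
  ]
}

version = "4.3.3"  # placeholder; a real plugin would use Puma::Const::VERSION

# Sum each stat across all workers.
last        = stats["worker_status"].map { |w| w["last_status"] }
running     = last.sum { |s| s["running"] }
max_threads = last.sum { |s| s["max_threads"] }
available   = last.sum { |s| s["pool_capacity"] }
backlog     = last.sum { |s| s["backlog"] }

line = "puma #{version} cluster: " \
       "#{stats['booted_workers']}/#{stats['workers']} workers: " \
       "#{running}/#{max_threads} threads, #{available} available, " \
       "#{backlog} backlog"
puts line  # => "puma 4.3.3 cluster: 2/2 workers: 16/32 threads, 8 available, 0 backlog"
```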
Commits updated: 0de8f6a to d331277.
Added pool capacity output and the suggested "threads out of" display, resulting in the status line shown above.
Thanks!
Released in v0.1.5