Rework the puma-helper output #18
Yep, though I don't agree about the two lines per process; I'm the one who asked for one line because we have a lot of processes in production, and it means you can see twice as much in one console screen. Also @w3st3ry, you (still) didn't take into account half of the things I asked for in my last feedback (which I can't see any more because I sent it via PM in Slack). I gave you the benefit of the doubt, but by now I guess I can consider you will never do them. This is not acceptable; you can't waste our time like that. You need to at least read what we say, and if you don't want to do something, it must be written down somewhere, with arguments; you can't just ignore it. So OK, I'll spend some time again to try to give you (mostly) the same feedback. Please do it this time:
Example output (made quickly by hand):
This output looks better ;) I would like to add something about errors too.
We should list them at the end, if any; otherwise it is difficult to notice them.
Here is a little update:
About CPU percentage, I will test to do something with
Queuing is at worker level, no? Not at application level?
It's at worker level in puma stats, yes, but it seems more interesting to me to have a queuing overview: aggregate the per-worker values and display them at application level. WDYT?
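The aggregation suggested above can be sketched in a few lines of Go: sum the per-worker `backlog` values from puma's stats JSON into one application-level figure. The JSON shape used here is an assumption based on puma's clustered-mode control-server output; verify it against the puma version you target.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pumaStats mirrors only the fields we need from puma's stats JSON
// in clustered mode (shape assumed; check against your puma version).
type pumaStats struct {
	WorkerStatus []struct {
		LastStatus struct {
			Backlog int `json:"backlog"`
		} `json:"last_status"`
	} `json:"worker_status"`
}

// appBacklog sums the per-worker backlog into one application-level value.
func appBacklog(raw []byte) (int, error) {
	var s pumaStats
	if err := json.Unmarshal(raw, &s); err != nil {
		return 0, err
	}
	total := 0
	for _, w := range s.WorkerStatus {
		total += w.LastStatus.Backlog
	}
	return total, nil
}

func main() {
	sample := []byte(`{"worker_status":[
		{"last_status":{"backlog":2}},
		{"last_status":{"backlog":1}}]}`)
	total, err := appBacklog(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println("queued (app level):", total) // prints 3
}
```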
Yep, ok
In order to summarize all these changes:
Questions before adding them to the todo-list:
About the CPU percentage, what's the relation with goroutines and locks? You can iterate over all apps to get the first number, then wait 1 second, get the second number for all apps, and then print. You don't have to over-engineer this.
I just gave this kind of solution first because it's more in line with the philosophy of the language. BTW, after some changes it should work like you said too, and it's easier to maintain for someone who is not a full-time Gopher, I agree. Otherwise, do you agree with the summary? @jarthod @spuyet
A couple of typos, but otherwise, yes:
It's the opposite: visible if there's an issue
→ to M
About the queue, I think it's "backlog" in the status json, but I'll let @spuyet confirm. About "Removed threads number per process": I think @spuyet said we should remove the number but keep the load bar graph per worker (because it's almost always 1), but I'm not sure it's a good idea, as we can have multiple threads, in which case it won't be 1. @spuyet WDYT?
I've explained it in the body of this issue, that's the
Yep, I don't find the number really useful as we already have the global value; the load graph seems to be enough for me.
Ok!
Please check out #21 about the UI.
I think that we should rework the `puma-helper` output; the current output is really hard to read and we should use `passenger-status` as an example:

(screenshots comparing the current `puma-helper` output vs `passenger-status`)
Here is a first draft of something a bit readable and largely inspired from the passenger output:
I know that the Go dependency you're using doesn't support current CPU percentage, and to be honest portability is a welcome feature, but current CPU usage is a mandatory one IMHO; a piece of software cannot be restricted only because of portability, especially for a tiny project like this one.
This is a first draft, comments and improvements are welcome.
Notes:
- CPU percentage => use /proc/PID/stat
- Uptime => use /proc/PID/stat too
- Queued => corresponds to the backlog value per process in puma stats