
Add a dangerous_limit parameter for WithCPUDump method #37

Closed
Jun10ng opened this issue Jan 14, 2022 · 8 comments

Jun10ng (Contributor) commented Jan 14, 2022

Hi team,

In my opinion, there should be a dangerous_limit parameter, meaning holmes will not dump a profile when the current CPU usage has already reached this limit, because CPU pprof usually costs some resources, commonly 5% or less. If holmes executing the CPU pprof pushes CPU usage up by another 5% and the service crashes as a result, I wouldn't want that.

func WithCPUDump(min int, diff int, abs int)

change to

func WithCPUDump(min int, diff int, abs int, dangerous int)

or add a new With* option func

func WithDangerousLimit(d int)

I prefer the latter.
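The latter fits holmes' functional-options style. A minimal, self-contained sketch of what such an option could look like — the `options` struct fields, `newOptions`, and the skip logic here are illustrative assumptions, not holmes' actual internals:

```go
package main

import "fmt"

// Illustrative option container; holmes' real struct differs.
type options struct {
	cpuMin, cpuDiff, cpuAbs int
	dangerousLimit          int // skip dumping when CPU usage >= this percent
}

type Option func(*options)

// WithCPUDump keeps its existing three thresholds.
func WithCPUDump(min, diff, abs int) Option {
	return func(o *options) {
		o.cpuMin, o.cpuDiff, o.cpuAbs = min, diff, abs
	}
}

// WithDangerousLimit is the proposed new option: above this CPU usage,
// holmes would not start a (costly) CPU profile at all.
func WithDangerousLimit(d int) Option {
	return func(o *options) { o.dangerousLimit = d }
}

func newOptions(opts ...Option) *options {
	o := &options{}
	for _, fn := range opts {
		fn(o)
	}
	return o
}

func main() {
	o := newOptions(WithCPUDump(20, 25, 80), WithDangerousLimit(90))
	cpuUsage := 92 // with CPU already at 92%, the dump would be skipped
	skip := o.dangerousLimit > 0 && cpuUsage >= o.dangerousLimit
	fmt.Println(skip) // prints "true"
}
```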

Jun10ng (Contributor, Author) commented Jan 14, 2022

@taoyuanyuan

taoyuanyuan (Contributor) commented

It's a good idea, and I prefer the latter too; we could configure limits not only for CPU, but also for mem, goroutine, etc.
PR welcome!

cch123 (Collaborator) commented Jan 15, 2022

In my previous experiments, if the goroutine num is huge, e.g. 100k, the goroutine dump will also become a heavy action: STW && stack dump.
Maybe a goroutine number limit param should also be added~

Jun10ng (Contributor, Author) commented Jan 15, 2022

> in my previous experiments, if goroutine num is huge, eg. 100k goroutine dump will also become a heavy action: stw && stack dump maybe the goroutine number limit param should also be added~

How would we measure this parameter? I mean, it's difficult to have a common goroutine number limit across different machines; unlike CPU or memory, it can't be measured as a percentage.

cch123 (Collaborator) commented Jan 15, 2022

> in my previous experiments, if goroutine num is huge, eg. 100k goroutine dump will also become a heavy action: stw && stack dump maybe the goroutine number limit param should also be added~

> how we measure this parameter, I mean it's difficult to have a common goroutine number limit on different machines, unlike CPU or memory can be measured in percent.

Yes, it's hard to configure a universal value for every type of application load. If users use this lib in their own app, maybe they can figure out a proper value in consideration of their own type of workload? I'm not sure.

Jun10ng (Contributor, Author) commented Jan 15, 2022

@cch123 OK, I will create a new PR, but before that I want to discuss: does this parameter control only goroutine profile dumping, or all types of profile dumping?

In other words, change EnableDump:

if err := h.EnableDump(cpu); err != nil {

or change:

func (h *Holmes) goroutineCheckAndDump(gNum int) {

cch123 (Collaborator) commented Jan 15, 2022

> @cch123 ok, I will create a new pr, but before that, I want to discuss does this parameter control only goroutine profile dumping or all types profile dumping?
>
> in other words, change EnableDump
>
> if err := h.EnableDump(cpu); err != nil {
>
> or change
>
> func (h *Holmes) goroutineCheckAndDump(gNum int) {

Only setting the limit on goroutineCheckAndDump is OK; be careful about the naming~
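A hedged sketch of what such a guard inside goroutineCheckAndDump could look like. The method name comes from the thread; the field name, the bool return, and the struct shape are illustrative assumptions rather than holmes' actual code:

```go
package main

import "fmt"

// Illustrative stand-in for the holmes struct; only the field needed
// for the guard is shown, and its name is hypothetical.
type Holmes struct {
	goroutineTriggerMax int // 0 means no upper limit
}

// goroutineCheckAndDump reports whether a dump would proceed. Above the
// configured max it skips entirely, because dumping 100k+ goroutines
// means a long stop-the-world stack walk.
func (h *Holmes) goroutineCheckAndDump(gNum int) bool {
	if h.goroutineTriggerMax > 0 && gNum > h.goroutineTriggerMax {
		return false // too many goroutines: the dump itself is dangerous
	}
	// ... the existing min/diff/abs threshold checks and the actual dump ...
	return true
}

func main() {
	h := &Holmes{goroutineTriggerMax: 100000}
	fmt.Println(h.goroutineCheckAndDump(5000))   // prints "true"
	fmt.Println(h.goroutineCheckAndDump(250000)) // prints "false"
}
```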

taoyuanyuan (Contributor) commented Jan 16, 2022

In my opinion, if this goroutine parameter controls all types of profile dumping, use the global EnableDump and modify WithCPUMax to WithMaxLimit(cpu, mem, goroutine); if it controls only the goroutine profile, change to WithGoroutineDump(min int, diff int, abs int, max int).
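The two alternatives can be sketched side by side. Both option names appear in the comment above; the struct fields and the option-function plumbing here are illustrative assumptions:

```go
package main

import "fmt"

// Illustrative option container; field names are hypothetical.
type options struct {
	cpuMax, memMax, grMax           int // global ceilings (alternative A)
	grMin, grDiff, grAbs, grDumpMax int // per-type goroutine config (alternative B)
}

type Option func(*options)

// Alternative A: one global option covering cpu, mem and goroutine,
// checked from the shared EnableDump path.
func WithMaxLimit(cpu, mem, goroutine int) Option {
	return func(o *options) {
		o.cpuMax, o.memMax, o.grMax = cpu, mem, goroutine
	}
}

// Alternative B: extend only the goroutine option with its own max,
// checked inside goroutineCheckAndDump.
func WithGoroutineDump(min, diff, abs, max int) Option {
	return func(o *options) {
		o.grMin, o.grDiff, o.grAbs, o.grDumpMax = min, diff, abs, max
	}
}

func main() {
	o := &options{}
	WithMaxLimit(90, 80, 100000)(o)
	WithGoroutineDump(1000, 25, 20000, 100000)(o)
	fmt.Println(o.grMax, o.grDumpMax) // prints "100000 100000"
}
```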

Jun10ng closed this as completed Jan 20, 2022