optimize memory allocation, change the default pool param and add the log of panic stack. #40
Conversation
Firstly, thanks for your interest in |
Also, if you intend to add a new feature to |
Please check the error message; you need to update the corresponding unit test file |
Sorry, that was an oversight on my part. I am fixing it now.
Codecov Report
```
@@            Coverage Diff             @@
##           master      #40      +/-   ##
==========================================
+ Coverage    97.8%   97.98%   +0.18%
==========================================
  Files           5        5
  Lines         273      298      +25
==========================================
+ Hits          267      292      +25
  Misses          3        3
  Partials        3        3
```
Continue to review full report at Codecov.
@panjf2000 PTAL
I've reviewed this PR and I will merge it into |
By the way, you said you faced a performance issue because you were unable to pre-allocate memory for the workers slice, which is why you are proposing this PR. My question is: did you apply this memory-optimization logic to your own |
Yes, I have applied it on my online servers, which serve almost 3 million app users across three machines. As I said, locking at high concurrency is like an atomic bomb, even though I only call rand while holding the lock. Another thing I did for performance is to map tasks onto 10 pools. That is to say, each Go process runs about 5 million goroutines across 10 ants pools, each with a worker slice of 500,000, which reduces contention on the slice lock in the same way a sharded map does.
Thanks. I've squashed the commits.
Before this change, the pprof report almost always showed tens of thousands, or even hundreds of thousands, of goroutines blocked on the retrieveWorker lock or other related locks. After the improvement, this no longer happens.
Hi, I am using it on my online servers, where each Go service needs almost 5 million goroutines.
I split the load into 10 smaller pools, because a single pool of five million slows down operations on the worker slice.
I made some small optimizations; I hope they are useful.
optimize memory allocation, change the default pool param and add the log of panic stack.
btw, the default value of DEFAULT_CLEAN_INTERVAL_TIME, one second, is too short: when the pool size is large, performance drops.