
optimize memory allocation, change the default pool param and add the log of panic stack. #40

Merged
merged 1 commit into panjf2000:master on Jul 26, 2019

Conversation

Anteoy
Contributor

@Anteoy Anteoy commented Jul 26, 2019

Hi, I am using ants on my online servers, where each Go service needs almost 5 million goroutines.
I divided the work into 10 small pools, because a single pool of five million slows down operations on the workers slice.
I made some small optimizations that I hope are useful:
optimize memory allocation, change the default pool param and add the log of panic stack.
btw, the default value DEFAULT_CLEAN_INTERVAL_TIME of one second is too short-lived. When the pool size is very large, performance will drop.
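The two code changes described above, pre-allocating the workers slice to the pool's capacity and logging the panic stack inside the worker's recover, could look roughly like this. This is an illustrative sketch, not ants' actual internals; the `pool` type and field names here are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"runtime/debug"
)

// pool is a stand-in for a goroutine pool; only the two relevant
// mechanisms are sketched here.
type pool struct {
	capacity int
	workers  []func()
}

// newPool pre-allocates the workers slice to the pool's capacity so it
// never reallocates while the pool grows (the memory-allocation change).
func newPool(capacity int) *pool {
	return &pool{
		capacity: capacity,
		workers:  make([]func(), 0, capacity), // len 0, cap = capacity
	}
}

// run executes a task and, on panic, logs the recovered value together
// with the stack trace instead of crashing the whole process
// (the panic-stack logging change).
func (p *pool) run(task func()) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("worker panic: %v\n%s", r, debug.Stack())
		}
	}()
	task()
}

func main() {
	p := newPool(4)
	fmt.Println(cap(p.workers)) // prints 4: no growth reallocation needed
	p.run(func() { panic("boom") }) // logged with stack, not fatal
	fmt.Println("still alive")
}
```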

@panjf2000
Owner

panjf2000 commented Jul 26, 2019

Firstly, thanks for your interest in ants. Your branch failed CI because the build broke due to my previous code changes, so please git pull --rebase onto your branch and re-submit this PR, thanks~

@panjf2000
Owner

panjf2000 commented Jul 26, 2019

Besides, if you intend to add a new feature to ants, please make sure that you implement it in both pool.go/worker.go and pool_func.go/worker_func.go.

@panjf2000
Owner

Please check out the error message; you have to update the corresponding unit test file, ants_test.go.

@Anteoy
Contributor Author

Anteoy commented Jul 26, 2019

Sorry, I was negligent. I am fixing it.

Review threads on ants.go (outdated), ants_test.go, and worker.go — all resolved.
@codecov

codecov bot commented Jul 26, 2019

Codecov Report

Merging #40 into master will increase coverage by 0.18%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master      #40      +/-   ##
==========================================
+ Coverage    97.8%   97.98%   +0.18%     
==========================================
  Files           5        5              
  Lines         273      298      +25     
==========================================
+ Hits          267      292      +25     
  Misses          3        3              
  Partials        3        3
Impacted Files   Coverage Δ
worker.go        100% <100%> (ø) ⬆️
pool_func.go     98.34% <100%> (+0.14%) ⬆️
worker_func.go   100% <100%> (ø) ⬆️
pool.go          98.31% <100%> (+0.13%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f447bf1...5dc8b9a. Read the comment docs.

@Anteoy
Contributor Author

Anteoy commented Jul 26, 2019

@panjf2000 PTAL

@panjf2000
Owner

I've reviewed this PR and I will merge it into master after you use git rebase to squash all commits in this PR; here is a guide you can refer to: squash-commits-into-one-git
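The squash-then-merge workflow the maintainer is asking for can be sketched non-interactively in a throwaway repo (GNU sed assumed; in a real PR you would run `git rebase -i`, mark the later commits as `squash` in the editor, and then force-push your branch):

```shell
set -e
# Build a throwaway repo with three empty commits to squash.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
for m in first second third; do
  git commit -q --allow-empty -m "$m"
done
echo "before: $(git rev-list --count HEAD) commits"

# Rewrite line 2 of the rebase todo from `pick` to `squash`, and accept
# the default combined commit message, so the last two commits merge.
GIT_SEQUENCE_EDITOR="sed -i 2s/^pick/squash/" \
GIT_EDITOR=true git rebase -qi HEAD~2

echo "after: $(git rev-list --count HEAD) commits"
# On a real PR branch you would then run: git push --force-with-lease
```

The `--force-with-lease` push is needed because squashing rewrites history; it refuses to overwrite commits you have not seen, unlike a plain `--force`.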

@panjf2000
Owner

panjf2000 commented Jul 26, 2019

By the way, you said you faced a performance issue when you were unable to pre-malloc the memory of the workers slice, which is why you are proposing this PR. My question is: did you apply this memory-optimization logic to your own ants and get the expected enhancement in your Go services?

@Anteoy
Contributor Author

Anteoy commented Jul 26, 2019

> By the way, you said you faced a performance issue when you were unable to pre-malloc the memory of the workers slice, which is why you are proposing this PR. My question is: did you apply this memory-optimization logic to your own ants and get the expected enhancement in your Go services?

Yes, I have applied it on my online servers, which serve almost 3 million app users across three servers. As I said, locking at high concurrency is like an atomic bomb, even though I only call rand while holding the lock. The other thing I did for performance is to map the tasks onto 10 pools. That is to say, each Go program has about 5 million goroutines spread over 10 ants pools, each with a workers slice of 500,000, which improves the performance of the slice lock the way a sharded map does.

@Anteoy
Contributor Author

Anteoy commented Jul 26, 2019

> I've reviewed this PR and I will merge it into master after you use git rebase to squash all commits in this PR; here is a guide you can refer to: squash-commits-into-one-git

thx. I squashed it.

@Anteoy
Contributor Author

Anteoy commented Jul 26, 2019

> By the way, you said you faced a performance issue when you were unable to pre-malloc the memory of the workers slice, which is why you are proposing this PR. My question is: did you apply this memory-optimization logic to your own ants and get the expected enhancement in your Go services?

Before this change, the pprof report almost always showed tens of thousands, or even hundreds of thousands, of goroutines blocked on the retrieveWorker lock or other related locks. After the improvement, this situation no longer appears.

@panjf2000 panjf2000 merged commit 3e1c7a0 into panjf2000:master Jul 26, 2019