
Asynchronous pruning for RubyThreadPoolExecutor #1082

Open
joshuay03 wants to merge 1 commit into master from better-ruby-thread-pool-executor-pruning
Conversation


@joshuay03 commented on Feb 8, 2025

Closes #1066
Closes #1075

Alternative to #1079

Implementation is based on the discussion in the linked issues.
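The general idea under discussion (workers wait on the shared queue with a timeout and exit when they have been idle too long) can be sketched roughly as follows. This is illustrative only, not the PR's implementation; it uses `Queue#pop(timeout:)` (Ruby >= 3.2), which returns `nil` on timeout:

```ruby
# Rough sketch of idle-timeout pruning, not the PR's actual code.
# A worker blocks on the shared queue with a timeout; a nil return means
# it stayed idle for the whole interval and may exit (be "pruned").
queue    = Queue.new
idletime = 0.2 # seconds of idleness before a worker may exit
ran      = []

worker = Thread.new do
  loop do
    task = queue.pop(timeout: idletime) # Ruby >= 3.2
    break if task.nil?                  # idle too long: prune this worker
    task.call
  end
end

queue << -> { ran << :task }
worker.join # the worker exits on its own after ~idletime of inactivity
```

The point of making this asynchronous is that no separate reaper pass is needed; each idle worker times out and removes itself.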

@joshuay03 force-pushed the better-ruby-thread-pool-executor-pruning branch 15 times, most recently from b6e5656 to f90d46d on February 9, 2025, 04:23
@joshuay03 force-pushed the better-ruby-thread-pool-executor-pruning branch 14 times, most recently from c5eca7d to 89b67f4 on February 14, 2025, 15:54
@joshuay03 force-pushed the better-ruby-thread-pool-executor-pruning branch 10 times, most recently from 547428b to a675844 on February 22, 2025, 01:33
@joshuay03 marked this pull request as ready for review on February 22, 2025, 01:40
@joshuay03 force-pushed the better-ruby-thread-pool-executor-pruning branch 7 times, most recently from fd02056 to 6cedac3 on March 17, 2025, 08:52
@eregon (Collaborator) left a comment

Finally taking a look at this, sorry for the delay

@joshuay03 force-pushed the better-ruby-thread-pool-executor-pruning branch 4 times, most recently from defd0f7 to 9739e9a on March 26, 2025, 09:34
@joshuay03 force-pushed the better-ruby-thread-pool-executor-pruning branch from 9739e9a to 6aa4ee5 on March 26, 2025, 09:40
@eregon (Collaborator) left a comment

The Queue stuff looks ready to me, i.e. I reviewed and approve that part.

As for the pool changes, it's hard for me to tell because I haven't had time to understand the implementation in detail (and likely won't have time for that soon).
It would be great if another concurrent-ruby maintainer could help review that part (and they should feel free to merge this PR after they approve).

```ruby
  throw :stop
end

prunable = false
```
Collaborator:
Is it impossible, when prunable_capacity is <= 0, for a worker to need to become prunable again?
I.e. what if prunable_capacity is 0 because all workers are busy, but later the pool is no longer running, and then we should prune/finish all workers?

```ruby
timeout = prunable && my_pool.running? ? my_idletime : nil
case message = my_queue.pop(timeout: timeout)
when nil
  if my_pool.prunable_capacity.positive?
```
Collaborator:
Suggested change:

```diff
- if my_pool.prunable_capacity.positive?
+ if my_pool.prunable_capacity > 0
```

I think this is clearer
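For context on the suggestion: `Integer#positive?` is simply a predicate for `> 0`, so the two spellings are behaviorally identical and the choice is purely one of readability:

```ruby
# Integer#positive? and `> 0` agree for every integer, including zero:
[-1, 0, 1, 42].each do |n|
  raise "mismatch for #{n}" unless n.positive? == (n > 0)
end
```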

```diff
@@ -95,8 +97,16 @@ def remaining_capacity
      end

      # @!visibility private
      def remove_busy_worker(worker)
        synchronize { ns_remove_busy_worker worker }
      def prunable_capacity
```
Collaborator:
Could you add a comment documenting what this returns?

```ruby
def remove_busy_worker(worker)
  synchronize { ns_remove_busy_worker worker }
def prunable_capacity
  synchronize { ns_prunable_capacity }
```
Collaborator:
Isn't there a risk the value is outdated by the time it's used since it will be used outside of this synchronize block?
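The check-then-act hazard being raised can be illustrated with a hypothetical pool (all names here are illustrative, not the PR's): the value read under the lock can be invalidated by another thread before the caller acts on it, so correctness has to come either from re-checking under the lock or from tolerating a stale read.

```ruby
require 'monitor'

# Hypothetical pool, for illustration only. A reader takes the lock to
# fetch the capacity, releases it, and only then acts on the value; in
# between, another thread may have changed it (a TOCTOU-style race).
class SketchPool
  include MonitorMixin

  def initialize
    super() # initialize the monitor before any synchronize call
    @prunable = 2
  end

  def prunable_capacity
    synchronize { @prunable } # value is only guaranteed inside the block
  end

  def prune_one!
    synchronize { @prunable -= 1 }
  end
end

pool = SketchPool.new
if pool.prunable_capacity > 0 # may already be stale on the next line
  # another thread could call prune_one! right here and invalidate the check
  pool.prune_one!
end
```

A common remedy is to do the check and the action inside a single synchronized method, so the decision and its effect are atomic.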


```ruby
@next_gc_time = Concurrent.monotonic_time + @gc_interval
def ns_prunable_capacity
```
Collaborator:
Could you explain the logic here? Is it how many workers can be removed at the current time?
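A hedged guess at the semantics being asked about (purely illustrative, not the PR's code): "prunable capacity" could mean the number of workers that may exit right now, i.e. the idle workers above the configured minimum pool size, with no floor once the pool is no longer running:

```ruby
# Illustrative only: one plausible meaning of "prunable capacity".
# ready: idle workers; total: all workers; min_length: minimum pool size.
def prunable_capacity_sketch(ready:, total:, min_length:, running:)
  return total unless running        # shutting down: every worker may exit
  [ready, total - min_length].min    # keep at least min_length workers alive
end
```

For example, with 5 workers, 3 of them idle, and a minimum size of 2, at most 3 workers could be pruned while the pool is running.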


Successfully merging this pull request may close these issues.

- CachedThreadPool does not spin down idle threads
- Unexpected pruning behaviour with consecutive task batches