[depth_dbm_freecell_solver] Implement extracted items batches of a user-specified size #7
shlomif added two commits that referenced this issue on Dec 6, 2016.
Implement #7 and in the process avoid excessive locking and unlocking for each added item. TODO: test.
Implemented in commit a8bdd2b with some additional cleanups. Closing.
shlomif added commits that referenced this issue on May 21, 2020, May 23, 2020, and Sep 21, 2021.
This is an attempt to improve the depth_dbm_fc_solver's multi-threading scalability, which is currently very poor. To reduce the amount of locking and increase per-thread CPU utilisation, we want to:
Define a --batch-size argument that specifies an arbitrarily large count of elements each thread extracts from the queue at a time. In each thread, extract these items and, without locking, decode them and calculate their derived positions (segregated into an array per irreversible-move depth). Then lock the states collections, insert them all into the central depth collections, and repeat.
This is instead of extracting one item from the queue at a time.
Most of this should be done in https://github.com/shlomif/fc-solve/blob/master/fc-solve/source/depth_dbm_solver.c for now. Please use a new feature git branch.