Move locking operations onto the module's thread #2866
Merged
Description of Changes
This branch moves a lot of code from blocking threads onto the module thread. A common pattern was using `asyncify` to spawn a blocking task, which would then grab a db-level lock and do some work. This change introduces the `on_module_thread` function on `ModuleHost`, which runs a function on the module thread (previously used only for reducer calls). The code inside is largely unchanged, so there are still locks guarding access to things, but there should be less contention on them.

This does limit some potential concurrency, since operations that only grab a shared lock are now serialized. In practice, this shouldn't be a big regression, since the locks we are using stop giving out shared locks as soon as one writer shows up, so readers were already queuing behind writers under load.
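As a rough, self-contained sketch of the pattern change (the types, field names, and the `on_module_thread` signature below are invented for illustration and won't match the real host crate exactly):

```rust
use std::sync::{Arc, RwLock};
use tokio::sync::{mpsc, oneshot};

// Illustrative stand-ins; the real types live in the host crate.
struct CommittedState(u64);
struct Db {
    committed_state: RwLock<CommittedState>,
}

type Job = Box<dyn FnOnce() + Send>;

struct ModuleHost {
    tx: mpsc::UnboundedSender<Job>,
}

impl ModuleHost {
    /// Spawn the module's dedicated thread, which runs jobs one at a time.
    fn new() -> Self {
        let (tx, mut rx) = mpsc::unbounded_channel::<Job>();
        std::thread::spawn(move || {
            while let Some(job) = rx.blocking_recv() {
                job();
            }
        });
        ModuleHost { tx }
    }

    /// Hypothetical sketch of the new API: run `f` on the module thread
    /// and await its result.
    async fn on_module_thread<F, R>(&self, f: F) -> R
    where
        F: FnOnce() -> R + Send + 'static,
        R: Send + 'static,
    {
        let (done, rx) = oneshot::channel();
        self.tx
            .send(Box::new(move || {
                let _ = done.send(f());
            }))
            .expect("module thread has exited");
        rx.await.expect("job was dropped")
    }
}

// Before: spawn onto the blocking pool (what `asyncify` does under the
// hood), where many tasks can contend on the db-level lock at once.
async fn old_pattern(db: Arc<Db>) -> u64 {
    tokio::task::spawn_blocking(move || db.committed_state.write().unwrap().0)
        .await
        .expect("blocking task panicked")
}

// After: ship the same closure to the module thread; the lock is still
// taken, but only one module operation can hold it at a time.
async fn new_pattern(host: &ModuleHost, db: Arc<Db>) -> u64 {
    host.on_module_thread(move || db.committed_state.write().unwrap().0)
        .await
}

#[tokio::main]
async fn main() {
    let db = Arc::new(Db {
        committed_state: RwLock::new(CommittedState(7)),
    });
    let host = ModuleHost::new();
    assert_eq!(old_pattern(db.clone()).await, 7);
    assert_eq!(new_pattern(&host, db).await, 7);
}
```

The key point is that the lock in `new_pattern` still exists, but because every module operation funnels through one thread, it is effectively uncontended.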
The only real benefit of this PR:
- I think it moves toward a simplified concurrency model, where all module operations are sent to a single actor, but there are a lot more changes needed to get there; a rough sketch of that end state follows below.
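For illustration only, a minimal sketch of that single-actor end state, where one task owns the module state outright and ordering comes from the message channel rather than from locks (the message type and field names here are invented, not part of this PR):

```rust
use tokio::sync::{mpsc, oneshot};

// Hypothetical message type: each module operation becomes a message
// handled by a single actor task.
enum ModuleOp {
    CallReducer {
        name: String,
        done: oneshot::Sender<String>,
    },
    CollectStats {
        done: oneshot::Sender<u64>,
    },
}

// The actor owns its state directly, so no locking is required: the
// channel serializes operations instead of a lock.
async fn run_module_actor(mut rx: mpsc::UnboundedReceiver<ModuleOp>) {
    let mut reducer_calls: u64 = 0;
    while let Some(op) = rx.recv().await {
        match op {
            ModuleOp::CallReducer { name, done } => {
                reducer_calls += 1;
                let _ = done.send(format!("ran reducer {name}"));
            }
            ModuleOp::CollectStats { done } => {
                let _ = done.send(reducer_calls);
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::unbounded_channel();
    tokio::spawn(run_module_actor(rx));

    let (done, result) = oneshot::channel();
    tx.send(ModuleOp::CallReducer { name: "init".into(), done }).unwrap();
    println!("{}", result.await.unwrap());
}
```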
Expected complexity level and risk
2
Testing
We have done some bot testing on a version of this, but unit testing it is not really feasible.