Add new context parameter for using concurrent locks #15684
Conversation
Changes look good, left a minor comment.
Also, there seem to be some coverage issues. Maybe add a couple more tests for the new context parameter.
LGTM
Merging, as the failure is unrelated.
Changes:
- Add new task context flag `useConcurrentLocks`.
- This can be set for an individual task or at the cluster level using `druid.indexer.task.default.context`.
- When set to true, any appending task uses an APPEND lock and any other ingestion task uses a REPLACE lock when using time chunk locking.
- If false (the default), we fall back to the context flag `taskLockType` and then `useSharedLock`.
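As a concrete illustration, the flag could be enabled cluster-wide via the Overlord runtime properties, or per task through the `context` of an ingestion spec. The exact syntax below is a sketch, assuming the default context is supplied as a JSON object:

```
# Overlord runtime.properties (cluster-level default)
druid.indexer.task.default.context={"useConcurrentLocks": true}
```

```json
{
  "type": "index_parallel",
  "spec": { "...": "..." },
  "context": { "useConcurrentLocks": true }
}
```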
This PR introduces a new task context flag `useConcurrentLocks`. This can be set for an individual task or at the cluster level using `druid.indexer.task.default.context`. When set to true, any appending task uses an APPEND lock and any other ingestion task uses a REPLACE lock when using time chunk locking. If not, we fall back to the context flag `taskLockType` and then `useSharedLock`.

This PR has:
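The precedence described above — `useConcurrentLocks` first, then an explicit `taskLockType`, then the legacy `useSharedLock` flag — can be sketched as follows. This is an illustrative sketch, not the actual Druid implementation; the class name `LockResolution` and method `resolveLockType` are hypothetical, though the lock type names match Druid's `TaskLockType` enum:

```java
import java.util.Map;

public class LockResolution {
    // Mirrors the lock kinds named in the PR description.
    enum TaskLockType { APPEND, REPLACE, EXCLUSIVE, SHARED }

    // Hypothetical helper showing the fallback order of the three context flags.
    static TaskLockType resolveLockType(Map<String, ?> context, boolean isAppendTask) {
        // 1. useConcurrentLocks takes highest precedence.
        if (Boolean.TRUE.equals(context.get("useConcurrentLocks"))) {
            return isAppendTask ? TaskLockType.APPEND : TaskLockType.REPLACE;
        }
        // 2. Otherwise, honor an explicit taskLockType if one is set.
        Object explicit = context.get("taskLockType");
        if (explicit != null) {
            return TaskLockType.valueOf(explicit.toString());
        }
        // 3. Finally, fall back to the legacy useSharedLock flag.
        if (Boolean.TRUE.equals(context.get("useSharedLock"))) {
            return TaskLockType.SHARED;
        }
        return TaskLockType.EXCLUSIVE;
    }

    public static void main(String[] args) {
        System.out.println(resolveLockType(Map.of("useConcurrentLocks", true), true));   // APPEND
        System.out.println(resolveLockType(Map.of("taskLockType", "SHARED"), false));    // SHARED
        System.out.println(resolveLockType(Map.<String, Object>of(), false));            // EXCLUSIVE
    }
}
```

Note that with concurrent locks enabled, appending and replacing tasks on the same interval no longer need a single exclusive lock, which is what allows them to run concurrently.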