
Fix idempotence of segment allocation and task report apis in native batch ingestion #11189

Merged 15 commits on May 7, 2021

Conversation

@jihoonson (Contributor) commented May 3, 2021

Description

Most internal APIs used in Druid's ingestion should be idempotent to handle transient errors. This PR fixes the idempotence of two APIs used in native batch ingestion.

The first API is the segment allocation API used in dynamic partitioning. Currently, transient network errors or task failures can lead to non-contiguous segment partitionIds being allocated by this API. This is a problem because the PartitionHolder for segments with non-contiguous partitionIds will never become complete in the broker timeline. As a result, everything will appear to work: the task will succeed, the segments will be published to the metadata store, and historicals will load and announce them, but you will never be able to query them.

To fix the segment allocation API, I had to add a new API that accepts extra parameters such as sequenceName to guarantee idempotence. This breaks the rolling upgrade that replaces idle nodes with a newer version one at a time. To resolve this issue, I added a task context parameter, useLineageBasedSegmentAllocation, to control which protocol is used for segment allocation in dynamic partitioning. This option is true by default and must be set to false during the rolling upgrade. In-place rolling upgrades are not a consideration because batch ingestion doesn't support them.
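To illustrate the idea (a minimal sketch with assumed names, not the actual Druid allocation code): if the allocator keys each allocation on the caller-supplied sequenceName plus the previously allocated segment in that lineage, a retried request gets back the segment it already allocated instead of consuming a new partitionId and leaving a gap.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical class, for illustration only.
final class IdempotentSegmentAllocator
{
  private final Map<String, Integer> allocatedPartitionIds = new ConcurrentHashMap<>();
  private final AtomicInteger nextPartitionId = new AtomicInteger(0);

  // sequenceName and the previous segment id together identify one logical
  // allocation in the task's lineage; a retry reuses the same key and
  // therefore gets the same partitionId, keeping partitionIds contiguous.
  int allocatePartitionId(String sequenceName, String previousSegmentId)
  {
    final String key = sequenceName + "|" + previousSegmentId;
    return allocatedPartitionIds.computeIfAbsent(key, k -> nextPartitionId.getAndIncrement());
  }
}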

The second API is the task report API used in all native batch ingestion types. This API can handle retries triggered by transient network errors, but it cannot handle duplicate reports caused by task retries. As a result, if a task fails after sending its report, the supervisor task will count both the report of the failed task and the report of its retry. Because of this bug, the parallel task can incorrectly estimate the cardinality of the partition column in hash partitioning and the distribution of the partition column in range partitioning.
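A minimal sketch of the deduplication idea (assumed names, not the actual supervisor task code): keying collected reports by the subtask spec id means a report from a retried attempt replaces the report of the failed attempt instead of being counted twice.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical collector, for illustration only.
final class SubTaskReportCollector<T>
{
  private final Map<String, T> reportsBySpecId = new ConcurrentHashMap<>();

  // Idempotent: receiving a report for the same spec more than once (retried
  // API call or retried task) keeps exactly one report per spec.
  void collectReport(String subTaskSpecId, T report)
  {
    reportsBySpecId.put(subTaskSpecId, report);
  }

  Map<String, T> getReports()
  {
    return reportsBySpecId;
  }
}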

Finally, to test the fix, I added random task failures and API call retries (emulating transient network failures) in AbstractParallelIndexSupervisorTaskTest. All unit tests extending this class, such as CompactionTaskParallelRunTest, HashPartitionMultiPhaseParallelIndexingTest, SinglePhaseParallelIndexingTest, and RangePartitionMultiPhaseParallelIndexingTest, now run with potential transient task failures and API call retries.
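One way such fault injection can be implemented (a hypothetical helper, not the actual AbstractParallelIndexSupervisorTaskTest code): each call site rolls the dice at most once, so an injected failure is always transient and a single retry goes through.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

// Hypothetical helper, for illustration only.
final class TransientFailureInjector
{
  private final double failureRate;
  private final Set<String> alreadyDecided = ConcurrentHashMap.newKeySet();

  TransientFailureInjector(double failureRate)
  {
    this.failureRate = failureRate;
  }

  // Roll the dice only on the first call for a given callId; a retried call
  // with the same callId always succeeds.
  <T> T call(String callId, Supplier<T> apiCall)
  {
    if (alreadyDecided.add(callId) && ThreadLocalRandom.current().nextDouble() < failureRate) {
      throw new RuntimeException("emulated transient failure for " + callId);
    }
    return apiCall.get();
  }
}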

Upgrade path to 0.22:

  • If you upgrade data nodes first, as recommended in https://druid.apache.org/docs/latest/operations/rolling-updates.html#rolling-restart-restore-based, there is nothing to do. All batch tasks with dynamic partitioning should succeed during the rolling upgrade.
  • If you upgrade the overlord before the middleManagers, you must set druid.indexer.task.default.context = { "useLineageBasedSegmentAllocation": false } during the upgrade, and restore it to { "useLineageBasedSegmentAllocation": true } after the upgrade is finished (see the example below).
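For example, assuming the property is set in the overlord's runtime.properties (adjust to however you manage common configuration):

# While middleManagers are still on a pre-0.22 version:
druid.indexer.task.default.context={"useLineageBasedSegmentAllocation": false}

# After every node is on 0.22 or later, restore the new default:
druid.indexer.task.default.context={"useLineageBasedSegmentAllocation": true}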

Key changed/added classes in this PR
  • SinglePhaseParallelIndexTaskRunner
  • TaskMonitor
  • AbstractParallelIndexSupervisorTaskTest

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

@jihoonson (Contributor, Author) commented:

I think we need a cluster-wide configuration corresponding to the new taskContext. I will add it soon.

@jihoonson (Contributor, Author) commented May 3, 2021

I added druid.indexer.task.useLineageBasedSegmentAllocation for middleManagers instead of taskContext.

@jihoonson (Contributor, Author) commented May 3, 2021

I manually tested the behavior with druid.indexer.task.default.context = { "useLineageBasedSegmentAllocation": true} during rolling upgrade.

@clintropolis (Member) left a comment:

overall approach lgtm

@@ -398,7 +398,7 @@ private StringFullResponseHolder submitRequest(
      } else {
        try {
          final long sleepTime = delay.getMillis();
-         log.debug(
+         log.warn(
Member:

👍

Comment on lines +284 to +288
throw new ISE(
"Can't compact segments of non-consecutive rootPartition range. Missing partitionIds between [%s] and [%s]",
curSegment.getEndRootPartitionId(),
nextSegment.getStartRootPartitionId()
);
Member:

nice 👍

final int maxTries,
@Nullable final CleanupAfterFailure cleanupAfterFailure,
@Nullable final String messageOnRetry,
boolean skipSleep
Member:

nit: is skip sleep the test parameter i guess? maybe worth javadocs

Contributor Author:

Added javadocs.

|`taskLockTimeout`|300000|task lock timeout in millisecond. For more details, see [Locking](#locking).|
|`forceTimeChunkLock`|true|_Setting this to false is still experimental_<br/> Force to always use time chunk lock. If not set, each task automatically chooses a lock type to use. If this set, it will overwrite the `druid.indexer.tasklock.forceTimeChunkLock` [configuration for the overlord](../configuration/index.md#overlord-operations). See [Locking](#locking) for more details.|
|`priority`|Different based on task types. See [Priority](#priority).|Task priority|
|`useLineageBasedSegmentAllocation`|false|Enable the new lineage-based segment allocation protocol for the native Parallel task with dynamic partitioning. This option should be off during the replacing rolling upgrade to Druid 0.22 or higher. Once the upgrade is done, it must be set to true.|
Member:

maybe worth elaborating on why, e.g. "...must be set to true to ensure data correctness"

Contributor:

Suggest also adding a note that this applies if upgrading from a pre-0.22.0 version

Contributor Author:

Updated the doc per suggestion.

@@ -351,6 +355,7 @@ public boolean add(final Task task) throws EntryExistsException

     // Set forceTimeChunkLock before adding task spec to taskStorage, so that we can see always consistent task spec.
     task.addToContextIfAbsent(Tasks.FORCE_TIME_CHUNK_LOCK_KEY, lockConfig.isForceTimeChunkLock());
+    defaultTaskConfig.getContext().forEach(task::addToContextIfAbsent);
Member:

I think we should also set the use-lineage config to true here if it is absent, so that custom taskContext configs that are missing that setting do not run with false. The documentation for the default config would then no longer need to indicate that config as the default, since it would be implicit.

Contributor Author:

Good idea. I changed the default of the default context to an empty map, and added useLineageBasedSegmentAllocation here.

Comment on lines 192 to 203
/**
* Transient task failure rate emulated by the taskKiller in {@link SimpleThreadingTaskRunner}.
* Per {@link SubTaskSpec}, there could be at most one task failure.
*/
private final double transientTaskFailureRate;

/**
* Transient API call failure rate emulated by {@link LocalParallelIndexSupervisorTaskClient}.
* This will be applied to every API call in the future.
*/
private final double transientApiCallFailureRate;

Member:

cool 👍

Comment on lines 88 to 89
"Cannot publish segments due to incomplete time chunk. Segments are [%s]",
segmentsPerInterval.stream().map(DataSegment::getId).collect(Collectors.toList())
Member:

👍 on sanity check... is there any chance the list of segments is huge here? (maybe we should use log.errorSegments to log segments and just include count/interval or something?)

Contributor Author:

Thanks for reminding me of that. Changed to use log.errorSegments and to avoid creating an overly large string for the exception message.

@clintropolis (Member) left a comment:

lgtm 🤘

@jihoonson (Contributor, Author) commented:

@jon-wei @clintropolis thanks for the review!

@eeren0 commented Jun 22, 2021

Hi, I am wondering if this fix may be related to the issue observed in #11348 as well?

@clintropolis added this to the 0.22.0 milestone on Aug 12, 2021.
@jon-wei pushed a commit to jon-wei/druid that referenced this pull request on Nov 22, 2021.