
Batch Indexing short circuits #11913

Closed
wants to merge 18 commits

Conversation


@capistrant capistrant commented Nov 11, 2021

Description

Add configurations to the index_hadoop and index task type tuning configs that allow certain ingestion jobs to short circuit early if they are determined to breach the thresholds added by this PR. These circuit breaker configs are turned off by default.

short circuit 1: maxSegmentsIngested - short circuits the ingestion job if it is determined that the job will generate more segments than the threshold specified in the tuningConfig

short circuit 2: maxIntervalsIngested - short circuits the ingestion job if it is determined that the job will generate more segment intervals than the threshold specified in the tuningConfig
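As a purely illustrative sketch of how the two new fields might look in a spec (the field names come from this PR; the surrounding spec structure is abbreviated and the values are made up):

```json
{
  "type": "index_hadoop",
  "spec": {
    "tuningConfig": {
      "type": "hadoop",
      "partitionsSpec": { "type": "hashed" },
      "maxSegmentsIngested": 500,
      "maxIntervalsIngested": 31
    }
  }
}
```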

These short circuits only apply in certain scenarios:

  • index_hadoop
    • hashed partitioning
      • both the segments circuit breaker and intervals circuit breaker are in effect if the job has to determine partitions
    • single dim partitioning
      • only the intervals circuit breaker is in effect if the job has to determine intervals at runtime
  • index
    • dynamic partitioning
      • only the intervals circuit breaker is in effect if the job has to determine intervals at runtime
    • hashed partitioning
      • both the segments circuit breaker and intervals circuit breaker are in effect if the job has to determine partitions
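In all of the scenarios above, the behavior reduces to a simple check once the determine-partitions (or determine-intervals) phase knows the counts. A minimal sketch of that check, assuming a threshold of zero or less means "disabled" (matching the off-by-default behavior); class and method names here are hypothetical, not the PR's actual code:

```java
public class CircuitBreakerSketch
{
  // A threshold <= 0 means the circuit breaker is disabled.
  static boolean breaches(long count, long max)
  {
    return max > 0 && count > max;
  }

  // Hypothetical check run after partitions/intervals have been determined;
  // failing fast here avoids launching the expensive segment-generation phase.
  static void checkThresholds(long numSegments, long numIntervals, long maxSegments, long maxIntervals)
  {
    if (breaches(numSegments, maxSegments)) {
      throw new IllegalStateException(
          "Job would create " + numSegments + " segments, exceeding maxSegmentsIngested=" + maxSegments
      );
    }
    if (breaches(numIntervals, maxIntervals)) {
      throw new IllegalStateException(
          "Job would create " + numIntervals + " intervals, exceeding maxIntervalsIngested=" + maxIntervals
      );
    }
  }
}
```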

Why not have both circuit breakers in effect for all batch job types?

  • First, the circuit breakers only make sense when intervals and/or partitions are determined at runtime. These are spec-level configs, so if the spec already lists the partitions and/or intervals explicitly, the thresholds are unnecessary: the spec author can simply not submit such a spec.
  • It was not obvious how this would be implemented for index_parallel, given the architecture of that task type.
  • It is not possible for dynamic partitioning in the index task type, because partitions are generated dynamically while the segments themselves are being created.
  • It was not obvious whether a segments threshold was feasible for single_dim partitioning in index_hadoop.

Why is this useful?

  • Prevents jobs of unexpected or undesired size.
    • My use case is a multi-tenant cluster where data engineers ingest their own data. We have control over the spec submission, but not over the underlying data the spec points to. These configs let us prevent jobs from creating so many segments that they hurt quality of service for other users on the multi-tenant cluster.
  • Even when the spec owner and data owner are the same, the spec owner may want to add these configs to guard against mistakes, such as generating far more segments than intended due to a misunderstanding of the underlying data being ingested.

Key changed/added classes in this PR
  • JobHelper
  • IndexTask
  • HashPartitionAnalysis
  • HadoopTuningConfig
  • HadoopDruidDetermineConfigurationJob

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.


stale bot commented Apr 19, 2022

This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If you think that's incorrect or this pull request should instead be reviewed, please simply write any comment. Even if closed, you can still revive the PR at any time or discuss it on the dev@druid.apache.org list. Thank you for your contributions.

@stale stale bot added the stale label Apr 19, 2022
@capistrant
Contributor Author

done close


stale bot commented Apr 19, 2022

This issue is no longer marked as stale.

@stale stale bot removed the stale label Apr 19, 2022

github-actions bot commented Dec 6, 2023

This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If you think that's incorrect or this pull request should instead be reviewed, please simply write any comment. Even if closed, you can still revive the PR at any time or discuss it on the dev@druid.apache.org list. Thank you for your contributions.

@github-actions github-actions bot added stale and removed stale labels Dec 6, 2023

github-actions bot commented Feb 6, 2024

This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If you think that's incorrect or this pull request should instead be reviewed, please simply write any comment. Even if closed, you can still revive the PR at any time or discuss it on the dev@druid.apache.org list. Thank you for your contributions.

@github-actions github-actions bot added the stale label Feb 6, 2024

github-actions bot commented Mar 6, 2024

This pull request/issue has been closed due to lack of activity. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.

@github-actions github-actions bot closed this Mar 6, 2024