prevent npe on mismatch between number of kafka partitions and task count #5139

Merged · pjain1 merged 1 commit into apache:master on Dec 20, 2017

Conversation

@pjain1 (Member) commented Dec 5, 2017

2017-12-05T20:15:35,610 WARN [KafkaSupervisor-<datasource>-Reporting-0] io.druid.indexing.kafka.supervisor.KafkaSupervisor - Lag metric: Kafka partitions [16, 1, 51, 36, 21, 6, 56, 41, 26, 11, 46, 31] do not match task partitions [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22, 23, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
2017-12-05T20:15:35,610 WARN [KafkaSupervisor-<datasource>-Reporting-0] io.druid.indexing.kafka.supervisor.KafkaSupervisor - Unable to compute Kafka lag
java.lang.NullPointerException
	at java.util.HashMap.merge(HashMap.java:1224) ~[?:1.8.0_131]
	at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320) ~[?:1.8.0_131]
	at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169) ~[?:1.8.0_131]
	at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1691) ~[?:1.8.0_131]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_131]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_131]
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_131]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_131]
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) ~[?:1.8.0_131]
	at io.druid.indexing.kafka.supervisor.KafkaSupervisor.getLagPerPartition(KafkaSupervisor.java:2113) ~[druid-kafka-indexing-service-0.11.1-1512178916-1232c9d-1815.jar:0.11.1-1512178916-1232c9d-1815]
	at io.druid.indexing.kafka.supervisor.KafkaSupervisor.lambda$emitLag$19(KafkaSupervisor.java:2143) ~[druid-kafka-indexing-service-0.11.1-1512178916-1232c9d-1815.jar:0.11.1-1512178916-1232c9d-1815]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_131]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_131]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_131]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
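
For context, the trace above shows the failure path: when the Kafka partition set and the task partition set disagree, the value mapper passed to Collectors.toMap can return null, and Collectors.toMap accumulates through HashMap.merge, which rejects null values with a NullPointerException. A minimal, self-contained reproduction (illustrative only, not Druid code; the names are made up):

import java.util.Collections;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ToMapNullRepro
{
  public static void main(String[] args)
  {
    // Only partition 1 has a known latest offset.
    Map<Integer, Long> latestOffsets = Collections.singletonMap(1, 100L);

    // Partition 2 has no entry, so the value mapper returns null and
    // HashMap.merge (used internally by Collectors.toMap) throws an NPE.
    Map<Integer, Long> lag = Stream.of(1, 2)
        .collect(Collectors.toMap(p -> p, latestOffsets::get));
  }
}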

@pjain1 pjain1 added this to the 0.12.0 milestone Dec 5, 2017
@pjain1 pjain1 added the Bug label Dec 5, 2017
@himanshug (Contributor) commented:

👍

@@ -2117,7 +2117,7 @@ private void updateLatestOffsetsFromKafka()
             && latestOffsetsFromKafka.get(e.getKey()) != null
             && e.getValue() != null
             ? latestOffsetsFromKafka.get(e.getKey()) - e.getValue()
-            : null
+            : Integer.MIN_VALUE
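
Pieced together from the hunk above and the snippet quoted below, the patched method plausibly reads as follows (a sketch, not the verbatim source): a partition missing an offset on either side gets an Integer.MIN_VALUE sentinel instead of null, so Collectors.toMap never feeds a null value into HashMap.merge.

private Map<Integer, Long> getLagPerPartition(Map<Integer, Long> currentOffsets)
{
  return currentOffsets
      .entrySet()
      .stream()
      .collect(
          Collectors.toMap(
              Map.Entry::getKey,
              e -> latestOffsetsFromKafka != null
                   && latestOffsetsFromKafka.get(e.getKey()) != null
                   && e.getValue() != null
                   ? latestOffsetsFromKafka.get(e.getKey()) - e.getValue()
                   : Integer.MIN_VALUE // sentinel instead of null
          )
      );
}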
Contributor commented on the diff:
Maybe the version below is better, because then we don't have to add unnecessary lag values?

private Map<Integer, Long> getLagPerPartition(Map<Integer, Long> currentOffsets)
  {
    if (latestOffsetsFromKafka == null) {
      return ImmutableMap.of();
    }

    return currentOffsets
        .entrySet()
        .stream()
        // drop partitions that are missing an offset on either side,
        // instead of reporting a sentinel lag for them
        .filter(e -> latestOffsetsFromKafka.get(e.getKey()) != null && e.getValue() != null)
        .collect(
            Collectors.toMap(
                Map.Entry::getKey,
                e -> latestOffsetsFromKafka.get(e.getKey()) - e.getValue()
            )
        );
  }

pjain1 (Member, Author) replied:
I didn't want to filter, so that the mismatch between the task count and the number of available Kafka partitions stays visible. Currently, wherever total lag is calculated, x -> Math.max(x, 0) is applied first, so setting the lag to Integer.MIN_VALUE won't add to the total.
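
As the comment above notes, total-lag aggregation clamps each value with x -> Math.max(x, 0). A minimal sketch of why the sentinel is therefore harmless (illustrative only, not the actual Druid aggregation code): the Integer.MIN_VALUE entry stays visible in the per-partition map but contributes nothing to the sum.

import java.util.HashMap;
import java.util.Map;

public class LagClampSketch
{
  static long totalLag(Map<Integer, Long> lagPerPartition)
  {
    return lagPerPartition.values()
                          .stream()
                          .mapToLong(x -> Math.max(x, 0)) // sentinel clamps to 0
                          .sum();
  }

  public static void main(String[] args)
  {
    Map<Integer, Long> lag = new HashMap<>();
    lag.put(0, 120L);
    lag.put(1, 45L);
    lag.put(2, (long) Integer.MIN_VALUE); // partition with no matching task offset
    System.out.println(totalLag(lag));    // prints 165
  }
}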

pjain1 (Member, Author) added:
However, if you prefer that approach, we can do that as well.

Contributor replied:
Both look good to me. If you think the current patch is better because it makes the mismatch between the task count and the number of partitions visible, please go for it.

@pjain1 pjain1 merged commit c56a980 into apache:master Dec 20, 2017
@pjain1 pjain1 deleted the fix_npe_lag branch December 20, 2017 22:23
seoeun25 added a commit to seoeun25/incubator-druid that referenced this pull request on Jan 10, 2020:
* Kafka Index Task that supports Incremental handoffs apache#4815

* prevent NPE from suppressing actual exception (apache#5146)

* prevent npe on mismatch between number of kafka partitions and task count (apache#5139)

* Throw away rows with timestamps beyond long bounds in kafka indexing (apache#5215) (apache#5232)

* Fix state check bug in Kafka Index Task (apache#5204) (apache#5248)