KAFKA-5157: Options for handling corrupt data during deserialization #3423

Closed
wants to merge 11 commits into from

Conversation

@enothereska (Contributor Author):

More tests coming, but basic structure should be in place.

@asfgit commented Jun 23, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5648/
Test PASSed (JDK 8 and Scala 2.12).

@asfgit commented Jun 23, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5662/
Test PASSed (JDK 7 and Scala 2.11).

@enothereska changed the title from "KAFKA-5157: Options for handling corrupt data during deserialization" to "[WiP] KAFKA-5157: Options for handling corrupt data during deserialization" on Jun 23, 2017
@enothereska (Contributor Author):

@mjsax @dguy @bbejeck @guozhangwang this is the implementation for KIP-161. Have a look when you can. Thanks.

@asfgit commented Jun 23, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5667/
Test PASSed (JDK 7 and Scala 2.11).

@asfgit commented Jun 23, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5653/
Test PASSed (JDK 8 and Scala 2.12).

@mjsax (Member) left a comment:

Thanks for the patch. Some minor comments.

@@ -239,6 +241,13 @@
private static final String STATE_DIR_DOC = "Directory location for state store.";

/**
* {@code default.deserialization.exception.handler}
Member:

Nit: Can you please add this in alphabetical order? Thx.

@@ -339,6 +348,11 @@
CommonClientConfigs.DEFAULT_SECURITY_PROTOCOL,
Importance.MEDIUM,
CommonClientConfigs.SECURITY_PROTOCOL_DOC)
.define(DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
Member:

Nit: As above, alphabetical within the "medium" group.

@@ -339,6 +348,11 @@
CommonClientConfigs.DEFAULT_SECURITY_PROTOCOL,
Importance.MEDIUM,
CommonClientConfigs.SECURITY_PROTOCOL_DOC)
.define(DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
Type.CLASS,
LogAndFailExceptionHandler.class.getName(),
Member:

Nit: remove getName()

Contributor Author:

They all have getName. Should I remove all? Otherwise it will look inconsistent.

Member:

Sounds good to me. Not sure why we started with this annoying .getName() in the first place...

private static final Logger log = LoggerFactory.getLogger(StreamThread.class);

@Override
public DeserializationHandlerResponse handle(ProcessorContext context,
Member:

Nit: add final

}

@Override
public void configure(Map<String, ?> configs) {
Member:

final

@@ -129,6 +135,55 @@ public void shouldProcessRecordsForOtherTopic() throws Exception {
assertEquals(0, sourceOne.numReceived);
}

@Test(expected = StreamsException.class)
Member:

We should not use expected here but the try { globalStateTask.update(...); fail(...); } catch (ignore) {} pattern. Also, the ConsumerRecord() should be instantiated outside/before the try-catch block.
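For illustration, a minimal sketch of the suggested pattern, assuming the test's existing globalStateTask, recordKey, and recordValue fields:

```java
// Instantiate the record before the try-catch block, then assert the failure
// explicitly instead of relying on @Test(expected = ...):
final ConsumerRecord<byte[], byte[]> record =
    new ConsumerRecord<>("topic", 1, 1, recordKey, recordValue);
try {
    globalStateTask.update(record);
    fail("Should have thrown a StreamsException");
} catch (final StreamsException ignored) {
    // expected: deserialization of the corrupt record failed
}
```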

}


@Test(expected = StreamsException.class)
Member:

as above

@Test(expected = StreamsException.class)
public void shouldThrowOnNegativeTimestamp() {
final List<ConsumerRecord<byte[], byte[]>> records = Collections.singletonList(
new ConsumerRecord<>("topic", 1, 1, -1L, TimestampType.CREATE_TIME, 0L, 0, 0, recordKey, recordValue));

final RecordQueue queue = new RecordQueue(new TopicPartition(topics[0], 1),
new MockSourceNode<>(topics, intDeserializer, intDeserializer),
new FailOnInvalidTimestamp());
new FailOnInvalidTimestamp(), new LogAndContinueExceptionHandler(), null);
Member:

Nit: please one argument per line.

@@ -158,7 +189,7 @@ public void shouldDropOnNegativeTimestamp() {

final RecordQueue queue = new RecordQueue(new TopicPartition(topics[0], 1),
new MockSourceNode<>(topics, intDeserializer, intDeserializer),
new LogAndSkipOnInvalidTimestamp());
new LogAndSkipOnInvalidTimestamp(), new LogAndContinueExceptionHandler(), null);
Member:

as above.

@@ -205,7 +206,7 @@ public ProcessorTopologyTestDriver(final StreamsConfig config,
final GlobalStateManagerImpl stateManager = new GlobalStateManagerImpl(globalTopology, globalConsumer, stateDirectory);
globalStateTask = new GlobalStateUpdateTask(globalTopology,
new GlobalProcessorContextImpl(config, stateManager, streamsMetrics, cache),
stateManager
stateManager, new LogAndContinueExceptionHandler()
Member:

as above

Member:

Guess you missed this one :)

@asfgit commented Jun 26, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5705/
Test PASSed (JDK 7 and Scala 2.11).

@asfgit commented Jun 26, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5691/
Test PASSed (JDK 8 and Scala 2.12).

@asfgit commented Jun 26, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5707/
Test PASSed (JDK 7 and Scala 2.11).

@asfgit commented Jun 26, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5693/
Test PASSed (JDK 8 and Scala 2.12).

@enothereska (Contributor Author):

@dguy @guozhangwang any further comments? Thanks.

@dguy (Contributor) left a comment:

Thanks @enothereska - I've left a few comments.

final Exception exception) {

log.warn("Deserialization exception {}. Processor context is {} and record is {}",
exception.toString(), context, record);
Contributor:

do we want the exception stacktrace?

final ConsumerRecord<byte[], byte[]> record,
final Exception exception) {

log.warn("Deserialization exception {}. Processor context is {} and record is {}",
Contributor:

Same here

} catch (Exception e) {
DeserializationExceptionHandler.DeserializationHandlerResponse response =
deserializationExceptionHandler.handle(processorContext, rawRecord, e);
if (response.id == DeserializationExceptionHandler.DeserializationHandlerResponse.FAIL.id) {
Contributor:

Why not just if(response == DeserializationExceptionHandler.DeserializationHandlerResponse.FAIL) ?

try {
return deserialize(rawRecord);
} catch (Exception e) {
DeserializationExceptionHandler.DeserializationHandlerResponse response =
Contributor:

nit: final

// catch and process if we have a deserialization handler
try {
return deserialize(rawRecord);
} catch (Exception e) {
Contributor:

Should we add a metric for records skipped due to deserialization errors?

Contributor Author:

Yes good idea. I'll piggyback on this KIP.
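The metric that came out of this suggestion appears later in this conversation's diff; pieced together here as a reference sketch:

```java
// Register a debug-level throughput sensor per source node (from NodeMetrics):
this.sourceNodeSkippedDueToDeserializationError = metrics.addThroughputSensor(
    scope, sensorNamePrefix + "." + name, "skippedDueToDeserializationError",
    Sensor.RecordingLevel.DEBUG, tagKey, tagValue);

// ...and record a data point whenever a corrupt record is skipped:
sourceNode.nodeMetrics.sourceNodeSkippedDueToDeserializationError.record();
```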

final byte[] key,
final byte[] recordValue,
boolean failExpected) {
ConsumerRecord record = new ConsumerRecord<>("t2", 1, 1,
Contributor:

nit: final

@@ -140,14 +142,45 @@ public void shouldThrowStreamsExceptionWhenValueDeserializationFails() throws Ex
queue.addRawRecords(records);
}

@Test
public void shouldNotThrowStreamsExceptionWhenKeyDeserializationFailsWithSkipHandler() throws Exception {
RecordQueue queue = new RecordQueue(new TopicPartition(topics[0], 1),
Contributor:

maybe extract to a field as it is also used in the test below

public ConsumerRecord<Object, Object> tryDeserialize(final ProcessorContext processorContext,
ConsumerRecord<byte[], byte[]> rawRecord) {

if (deserializationExceptionHandler == null) {
Contributor:

Can we assume and/or ensure that this is non-null? Then we won't need this check here and can just have a single path through this method. I think we default to LogAndFail... if nothing is set - right?
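A hedged sketch of why the null check could go away, assuming the handler is always resolved through StreamsConfig (getConfiguredInstance is the standard AbstractConfig helper):

```java
// Because DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG defaults to
// LogAndFailExceptionHandler, this lookup should never return null:
final DeserializationExceptionHandler handler = config.getConfiguredInstance(
    StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
    DeserializationExceptionHandler.class);
```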

@asfgit commented Jul 4, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5887/
Test PASSed (JDK 7 and Scala 2.11).

@enothereska (Contributor Author):

Thanks @dguy

@asfgit commented Jul 4, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5889/
Test PASSed (JDK 7 and Scala 2.11).

@asfgit commented Jul 4, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5874/
Test PASSed (JDK 8 and Scala 2.12).

@dguy (Contributor) left a comment:

Thanks @enothereska. I've left one comment. I think the only other thing remaining is whether or not we should implement the pattern for setting state that @guozhangwang suggested.

public DeserializationHandlerResponse handle(final ProcessorContext context,
final ConsumerRecord<byte[], byte[]> record,
final Exception exception) {
StringWriter sWriter = new StringWriter();
Contributor:

You can just pass the exception as the last parameter, i.e., log.warn("p1: {} p2: {}", p1, p2, exception)

Also, just realised that ProcessorContextImpl doesn't have a toString. Perhaps we just need to log the taskId, topic, partition, and offset? i.e.,
log.warn("Exception caught during Deserialization, taskId: {}, topic:{}, partition:{}, offset:{}", context.taskId(), record.topic(), record.partition(), record.offset(), exception)

Same in the other exception handler

Contributor Author:

If I pass just the exception, it reverts to exception.toString(). It won't print the stack trace. As a typical NPE example:

  • with just the exception passed: java.lang.NullPointerException
  • with the stack trace:

    java.lang.NullPointerException
        at org.rocksdb.RocksDB.get(RocksDB.java:791)
        at org.apache.kafka.streams.state.internals.RocksDBStore.getInternal(RocksDBStore.java:235)
        at org.apache.kafka.streams.state.internals.RocksDBStore.get(RocksDBStore.java:219)
        at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.delete(ChangeLoggingKeyValueBytesStore.java:80)
        at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueStore.delete(ChangeLoggingKeyValueStore.java:96)

Contributor Author:

Shall we add a toString to ProcessorContextImpl? That sounds useful.

Contributor:

@enothereska

If I pass just the exception, it reverts to exception.toString(). It won't print the stack trace.

My guess is that you are adding a {} for the exception, in which case it probably will just print the string. You don't need to add it. For example, from another one we have:
log.warn("{} Failed offset commits {}: ", logPrefix, consumedOffsetsAndMetadata, e)

Contributor:

Having toString on ProcessorContextImpl might be useful, but it also might be too much information for this particular log message

@enothereska (Contributor Author) commented Jul 4, 2017:

I've left one comment. I think the only other thing remaining is whether or not we should implement the pattern for setting state.

Thanks @dguy, I'm not sure what this refers to. I looked back but I don't see @guozhangwang's comment. Could you elaborate? Thanks.

@dguy (Contributor) commented Jul 4, 2017:

@enothereska doh! I've got my PRs crossed! ignore me

@asfgit commented Jul 5, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5899/
Test PASSed (JDK 7 and Scala 2.11).

@dguy (Contributor) left a comment:

One more comment, otherwise LGTM. @mjsax can you please make another pass over it?

@@ -80,8 +88,14 @@ public TopicPartition partition() {
* @return the size of this queue
*/
public int addRawRecords(Iterable<ConsumerRecord<byte[], byte[]>> rawRecords) {
ConsumerRecord<Object, Object> record = null;
Contributor:

I don't see any reason for this variable to not be declared inside the loop as it was previously
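A hedged sketch of the suggested scoping; recordDeserializer and fifoQueue are hypothetical names for the queue's internals, and tryDeserialize returning null for a skipped record follows from the handler code later in this PR:

```java
for (final ConsumerRecord<byte[], byte[]> rawRecord : rawRecords) {
    // Declared inside the loop, the deserialized record's scope is one iteration:
    final ConsumerRecord<Object, Object> record =
        recordDeserializer.tryDeserialize(processorContext, rawRecord);
    if (record != null) {       // null means the handler chose to skip the record
        fifoQueue.addLast(record);
    }
}
```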

@asfgit commented Jul 5, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5884/
Test PASSed (JDK 8 and Scala 2.12).

@asfgit commented Jul 5, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5903/
Test PASSed (JDK 7 and Scala 2.11).

@asfgit commented Jul 5, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5888/
Test PASSed (JDK 8 and Scala 2.12).

@mjsax (Member) left a comment:

Some comments.

import org.apache.kafka.common.Configurable;
import org.apache.kafka.streams.processor.ProcessorContext;

public interface DeserializationExceptionHandler extends Configurable {
Member:

We should have proper class JavaDocs for all public interfaces.
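For instance, a hedged sketch of what such class-level JavaDoc might look like (the wording is illustrative, not the doc that was merged):

```java
/**
 * A {@code DeserializationExceptionHandler} specifies how to react when a
 * record cannot be deserialized, e.g., because it is corrupt: either fail the
 * task or log the error and continue processing with the next record.
 */
public interface DeserializationExceptionHandler extends Configurable {
    // ...
}
```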


public interface DeserializationExceptionHandler extends Configurable {
/**
* Inspect a record and the exception received
Member:

Nit: . missing at the end

public final short id;

DeserializationHandlerResponse(int id, String name) {
this.id = (short) id;
Member:

If we go with short, the parameter should be short, too?
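A hedged sketch of that suggestion; the narrowing cast would then move to the enum constant declarations:

```java
// With a short parameter, constants would pass a cast literal,
// e.g. FAIL((short) 1, "FAIL"):
DeserializationHandlerResponse(final short id, final String name) {
    this.id = id;       // no (short) cast needed here anymore
    this.name = name;
}
```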

/** the permanent and immutable id of an API--this can't change ever */
public final short id;

DeserializationHandlerResponse(int id, String name) {
Member:

nit: add final

final Exception exception) {

log.warn("Exception caught during Deserialization, " +
"taskId: {}, topic:{}, partition:{}, offset:{}",
Member:

Nit: some whitespaces are missing

final Exception exception) {

log.warn("Exception caught during Deserialization, " +
"taskId: {}, topic:{}, partition:{}, offset:{}",
Member:

Nit: fix indentation; some whitespace is missing.

@@ -63,7 +56,7 @@ public GlobalStateUpdateTask(final ProcessorTopology topology,
for (final String storeName : storeNames) {
final String sourceTopic = storeNameToTopic.get(storeName);
final SourceNode source = topology.source(sourceTopic);
deserializers.put(sourceTopic, new SourceNodeAndDeserializer(source, new SourceNodeRecordDeserializer(source)));
deserializers.put(sourceTopic, new SourceNodeRecordDeserializer(source, this.deserializationExceptionHandler));
Member:

Nit: remove the this. qualifier.

final DeserializationExceptionHandler.DeserializationHandlerResponse response =
deserializationExceptionHandler.handle(processorContext, rawRecord, e);
if (response == DeserializationExceptionHandler.DeserializationHandlerResponse.FAIL) {
throw e;
Member:

Should we wrap this with a StreamsException? This way, we could add a detailed error message and explain in the message text that users can set a different exception handler, etc. (similar to the timestamp extractor?) -- so users don't need to ask on the mailing list :)
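A hedged sketch of the suggested wrapping (the message text is illustrative):

```java
if (response == DeserializationExceptionHandler.DeserializationHandlerResponse.FAIL) {
    // Wrap so the message can point users at the config instead of the mailing list:
    throw new StreamsException("Deserialization exception handler is set to fail upon" +
        " a deserialization error. To skip corrupt records instead, configure a" +
        " different handler via " +
        StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG + ".", e);
}
```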

@@ -205,7 +206,7 @@ public ProcessorTopologyTestDriver(final StreamsConfig config,
final GlobalStateManagerImpl stateManager = new GlobalStateManagerImpl(globalTopology, globalConsumer, stateDirectory);
globalStateTask = new GlobalStateUpdateTask(globalTopology,
new GlobalProcessorContextImpl(config, stateManager, streamsMetrics, cache),
stateManager
stateManager, new LogAndContinueExceptionHandler()
Member:

Guess you missed this one :)

@@ -186,6 +187,8 @@ public NodeMetrics(StreamsMetrics metrics, String name, String sensorNamePrefix)
this.nodeCreationSensor = metrics.addLatencyAndThroughputSensor(scope, sensorNamePrefix + "." + name, "create", Sensor.RecordingLevel.DEBUG, tagKey, tagValue);
this.nodeDestructionSensor = metrics.addLatencyAndThroughputSensor(scope, sensorNamePrefix + "." + name, "destroy", Sensor.RecordingLevel.DEBUG, tagKey, tagValue);
this.sourceNodeForwardSensor = metrics.addThroughputSensor(scope, sensorNamePrefix + "." + name, "forward", Sensor.RecordingLevel.DEBUG, tagKey, tagValue);
this.sourceNodeSkippedDueToDeserializationError = metrics.addThroughputSensor(scope, sensorNamePrefix + "." + name, "skippedDueToDeserializationError", Sensor.RecordingLevel.DEBUG, tagKey, tagValue);

Member:

Nit: remove empty line

@asfgit commented Jul 6, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5904/
Test PASSed (JDK 8 and Scala 2.12).

@asfgit commented Jul 6, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5919/
Test PASSed (JDK 7 and Scala 2.11).

@asfgit commented Jul 7, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5943/
Test FAILed (JDK 8 and Scala 2.12).

@asfgit commented Jul 7, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5958/
Test FAILed (JDK 7 and Scala 2.11).

@enothereska (Contributor Author):

Could not write standard input into: Gradle build daemon

@enothereska (Contributor Author):

retest this please

@asfgit commented Jul 7, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5959/
Test FAILed (JDK 7 and Scala 2.11).

@enothereska (Contributor Author):

org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldAllowToQueryAfterThreadDied failed. There is already a JIRA for it https://issues.apache.org/jira/browse/KAFKA-5566.

@asfgit commented Jul 7, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5944/
Test FAILed (JDK 8 and Scala 2.12).

@enothereska (Contributor Author) commented Jul 7, 2017:

@dguy I don't have anything else to add to this. Fixing the QueryableState test in a separate PR. Thanks. Also, I just merged with trunk.

@asfgit commented Jul 7, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5962/
Test PASSed (JDK 7 and Scala 2.11).

@asfgit commented Jul 7, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5947/
Test PASSed (JDK 8 and Scala 2.12).

@enothereska (Contributor Author):

@guozhangwang any more comments?

@dguy (Contributor) commented Jul 10, 2017:

retest this please

@asfgit commented Jul 10, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/6015/
Test PASSed (JDK 7 and Scala 2.11).

@asfgit commented Jul 10, 2017:

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/6000/
Test PASSed (JDK 8 and Scala 2.12).

dguy approved these changes Jul 10, 2017
@dguy (Contributor) left a comment:

Thanks @enothereska, LGTM

@asfgit closed this in a1f97c8 on Jul 10, 2017
@guozhangwang (Contributor) left a comment:

Left some comments on this PR. Sorry for the late reply.

sourceNode.nodeMetrics.sourceNodeSkippedDueToDeserializationError.record();
}
}
return null;