[FLINK-2386] Rework Kafka consumer for Flink #1028
Conversation
…ption handling and code simplification
- The exceptions are no longer logged by the operators themselves. Operators perform only cleanup in reaction to exceptions. Exceptions are reported only to the root Task object, which knows whether this is the first failure-causing exception (root cause), a subsequent exception, or whether the task was actually canceled already. In the latter case, exceptions are ignored, because many cancellations lead to meaningless exceptions.
- More exceptions in signatures, less wrapping where not needed
- Core resource acquisition/release logic is in one streaming task, reducing code duplication
- Guaranteed cleanup of output buffer and input buffer resources (formerly missed when other exceptions were encountered)
- Fix mixup in instantiation of source contexts in the stream source task
- Auto watermark generators correctly shut down their interval scheduler
- Improve use of generics, got rid of many raw types
This closes apache#1017
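The reporting scheme described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Flink Task code: the point is that operators never log, only the first failure-causing exception is recorded, and errors arriving after cancellation are dropped.

```java
// Illustrative sketch (class and method names are invented, not Flink's API).
// Operators call reportError() during cleanup; the task keeps only the first
// failure cause and ignores exceptions raised after cancellation.
public class TaskFailureSketch {

    private static Throwable rootCause;
    private static boolean canceled;

    /** Records only the first failure cause; later or post-cancellation errors are dropped. */
    static synchronized void reportError(Throwable t) {
        if (!canceled && rootCause == null) {
            rootCause = t;
        }
    }

    static synchronized void cancel() {
        canceled = true;
    }

    static synchronized Throwable getRootCause() {
        return rootCause;
    }

    public static void main(String[] args) {
        reportError(new RuntimeException("first"));
        reportError(new RuntimeException("second")); // ignored: not the root cause
        System.out.println(getRootCause().getMessage()); // prints "first"
    }
}
```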
…nting has happened before a failure.
…alizationSchema prominent, add tests.
…ources and move them to 'org.apache.flink.kafka_backport'
How about dropping the backported Kafka code and relying completely on our own implementation against the SimpleConsumer API? |
I like this idea a lot. The backported code is not very stable anyways... |
I've opened a pull request with the code removed against your branch: StephanEwen#14 |
So does #1039 depend on this one? |
Since it has been rebased into #1039 already, could this one be closed and the review done on that one instead? |
Yes, I think we can close this issue. |
Subsumed by #1039 |
This is a reworked and extended version of #996. It also builds on top of #1017.
It improves the Kafka consumer, fixes bugs, and offers pluggable fetcher and offset committer to make it work across Kafka versions from 0.8.1 to 0.8.3 (upcoming).
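The pluggable design mentioned above could look roughly like the following sketch. These interface and class names are illustrative assumptions, not the actual API from this pull request: the idea is that the consumer is parameterized with a fetcher and an offset committer, so Kafka-version-specific implementations can be swapped in.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a pluggable fetcher / offset committer design
// (names invented for illustration; not the actual Flink classes).
public class PluggableConsumerSketch {

    /** Runs the record-fetching loop; a real version would target a specific Kafka API. */
    interface Fetcher {
        void run() throws Exception;
        void close();
    }

    /** Commits partition -> offset pairs, e.g. to ZooKeeper or to a broker. */
    interface OffsetCommitter {
        void commit(Map<Integer, Long> partitionOffsets) throws Exception;
    }

    /** Trivial in-memory committer, standing in for a version-specific implementation. */
    static class InMemoryCommitter implements OffsetCommitter {
        final Map<Integer, Long> committed = new HashMap<>();

        public void commit(Map<Integer, Long> partitionOffsets) {
            committed.putAll(partitionOffsets);
        }
    }

    public static void main(String[] args) throws Exception {
        InMemoryCommitter committer = new InMemoryCommitter();
        Map<Integer, Long> offsets = new HashMap<>();
        offsets.put(0, 42L);
        committer.commit(offsets);
        System.out.println(committer.committed.get(0)); // prints 42
    }
}
```

Keeping the fetch and commit paths behind interfaces is what lets one consumer cover Kafka 0.8.1 through 0.8.3, as the description claims.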
Functionality
Tests
This pull request includes a set of thorough new tests for the Kafka consumer.
Limitations
The code based on the low-level consumer seems to work well.
The high-level consumer does not work with very large records. This looks like a problem in the backported Kafka code, but that is not 100% certain.
Debug Code
This pull request includes some debug code in the BarrierBuffer that I intend to remove. It is there to track possible corner-case problems in the checkpoint barrier handling.