Add 0.9 Kafka binder #2
From @mbogoevici on February 24, 2016 18:2 We need to discuss backwards-compatibility options for 0.8, too, I believe.
Yeah, maybe we would be fine with the existing tests, but chances are @garyrussell would want to add new tests, too. Backward compatibility is a whole different story, though. Gary's thoughts below.
From @mbogoevici on February 24, 2016 18:13 Just to clarify: I do not feel we should support both in the same release. But there should be an option for users that are on 0.8, even if that means moving the 0.8 version out post-1.1 (when it is most likely we'll upgrade to 0.9).
From @mbogoevici on February 24, 2016 18:17 And for 1.0 we can provide a separate 0.9 binder implementation that could be developed between the RC and GA timeline for 1.0 (also more lenient on dependencies, e.g. using Mx releases of Spring Integration Kafka, which is typically a non-starter for RC releases).
From @cdupuis on May 24, 2016 10:2 @mbogoevici and @garyrussell what are the plans for this? Do you guys have any idea on timing on this? Thanks, cd
From @mbogoevici on May 24, 2016 10:39 @cdupuis: Based on the current progress of Spring Kafka and Spring Integration Kafka 0.9, we will be able to provide a milestone that supports it in the next few weeks, with the goal of aligning Kafka 0.9 support in SCSt with the Spring Cloud Data Flow 1.0 release. Note that the current implementation still supports interacting with 0.9 brokers, but it uses the simple consumer API of 0.8.x (which is still compatible). The major goal is to take advantage of the new 0.9 client features, especially rebalancing and dynamic scaling of clients.
From @mbogoevici on May 24, 2016 10:51 To wit, the only blocker of sorts is spring-projects/spring-kafka#84, and we are positive that it will be solved soon.
From @cdupuis on May 25, 2016 15:38 Thanks @mbogoevici.
Closed via #13
@sabbyanandan According to the latest Spring Data Flow documentation: Reference: Great work, it's an awesome integration!
Hi, @Kenvelo90:
If you're building standalone Spring Cloud Stream (SCSt) applications, yes, you could, and that is supported. In Spring Cloud Data Flow there is no tight coupling with SCSt, except of course that the applications built with SCSt are registered and used in the stream definitions. That said, we are in the process of upgrading the OOTB applications to the latest SCSt 1.1.0.RELEASE, and when that is done we will update the docs. In the meantime, you can build and register custom SCSt applications with the latest Kafka binder to take advantage of the auto-balancing capabilities.
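For reference, pulling the Kafka binder into a custom SCSt application amounts to adding the binder dependency to the application's pom.xml. A minimal sketch (the version is assumed to be managed by the Spring Cloud Stream BOM or parent):

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
```

The rebuilt application can then be registered with Data Flow and used in stream definitions in place of the OOTB app.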
@sabbyanandan Great, Thank You! |
* Offset commit when DLQ is enabled and manual ack is used

  Resolves #870

  When an error occurs, if the application uses manual acknowledgment (i.e. autoCommitOffset is false) and DLQ is enabled, then after publishing to the DLQ the offset is currently not committed. This change addresses the issue by manually committing the offset after publishing to the DLQ.
* Address PR review comments
* Address PR review comments - #2
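The ordering the commit message describes can be sketched in plain Java. This is an illustrative model, not the binder's actual internals: the types `DlqPublisher` and `ManualAckConsumer` are hypothetical stand-ins, and the point is simply that with `autoCommitOffset=false` the failed record's offset is committed only after the DLQ publish succeeds, so the record is neither redelivered nor silently dropped.

```java
import java.util.ArrayList;
import java.util.List;

public class DlqCommitSketch {

    // Hypothetical stand-in for publishing a failed record to the DLQ topic.
    interface DlqPublisher {
        boolean publish(String record);
    }

    // Hypothetical stand-in for a consumer with manual offset commits
    // (autoCommitOffset = false).
    static class ManualAckConsumer {
        final List<Long> committedOffsets = new ArrayList<>();

        void commit(long offset) {
            committedOffsets.add(offset);
        }
    }

    // On a processing failure: publish to the DLQ first, then commit the
    // offset manually. Before the fix described above, this commit step
    // was missing, leaving the offset uncommitted.
    static void handleFailure(String record, long offset,
                              DlqPublisher dlq, ManualAckConsumer consumer) {
        if (dlq.publish(record)) {
            consumer.commit(offset);
        }
    }

    public static void main(String[] args) {
        ManualAckConsumer consumer = new ManualAckConsumer();
        handleFailure("bad-record", 42L, r -> true, consumer);
        System.out.println("committed=" + consumer.committedOffsets);
    }
}
```

If the DLQ publish fails, the offset stays uncommitted and the record is redelivered, which matches the at-least-once semantics the thread assumes.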
From @sabbyanandan on February 24, 2016 17:51
As a developer, I'd like to continue the work from #340, so I can adapt the existing SCSt-Kafka binder to the 0.9 release of Apache Kafka.
Acceptance:
* pom.xml
* ticktock stream with Kafka 0.9 running as messaging middleware

Copied from original issue: spring-cloud/spring-cloud-stream#373
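For context, the ticktock acceptance stream is typically created from the Data Flow shell along these lines (a sketch, assuming a Data Flow server whose apps are bound to a Kafka 0.9 broker):

```
dataflow:> stream create ticktock --definition "time | log" --deploy
```

The `time` source emits a timestamp every second and the `log` sink writes it out, so seeing timestamps in the sink's log verifies that messages are flowing through Kafka.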