
Add 0.9 Kafka binder #2

Closed

sabbyanandan opened this issue Jun 21, 2016 · 12 comments

@sabbyanandan
Contributor

sabbyanandan commented Jun 21, 2016

From @sabbyanandan on February 24, 2016 17:51

As a developer, I'd like to continue the work from #340, so I can adapt the existing SCSt-Kafka binder with 0.9 release of Apache Kafka.

Acceptance:

  • SCSt-Kafka binder supports 0.9 release of Kafka
  • 0.9 is the default version in pom.xml
  • I can build SCSM applications with SCSt-kafka as the binder
  • I can run ticktock stream with Kafka 0.9 running as messaging middleware
  • Unit tests included
  • Docs updated

Copied from original issue: spring-cloud/spring-cloud-stream#373

@sabbyanandan
Contributor Author

From @mbogoevici on February 24, 2016 18:02

We need to discuss backward-compatibility options for 0.8 too, I believe.

@sabbyanandan
Contributor Author

Yeah, maybe we would be fine with the existing tests, but chances are @garyrussell would want to add new tests, too.

Backward compatibility is a whole different story, though. Gary's thoughts are below.

Gary Russell [9:47 AM] 
But I think Marius is planning to create the binder as a new project so there could be two binders; but I don’t know how long he would want to support that
Gary Russell [9:47 AM] 
The 0.9 client is sooooo much simpler, it makes no sense (from an SI perspective) to support both.

@sabbyanandan
Contributor Author

From @mbogoevici on February 24, 2016 18:13

Just to clarify: I do not feel we should support both in the same release. But there should be an option for users who are on 0.8 - even if that means moving the 0.8 version out post-1.1 (when it is most likely that we'll upgrade to 0.9).

@sabbyanandan
Contributor Author

From @mbogoevici on February 24, 2016 18:17

And for 1.0 we can provide a separate 0.9 binder implementation that could be developed between the RC and GA timeline for 1.0 (also more lenient for dependencies, e.g. using Mx releases of Spring Integration Kafka, which is typically a non-starter for RC releases).

@sabbyanandan
Contributor Author

From @cdupuis on May 24, 2016 10:02

@mbogoevici and @garyrussell, what are the plans for this? Do you have any idea on the timing for this?

Thanks, cd

@sabbyanandan
Contributor Author

From @mbogoevici on May 24, 2016 10:39

@cdupuis: Based on the current progress of Spring Kafka and Spring Integration Kafka 0.9, we will be able to provide a milestone that supports it in the coming weeks, with the goal of aligning Kafka 0.9 support in SCSt with the Spring Cloud Data Flow 1.0 release.

Note that the current implementation still supports interacting with 0.9 brokers, but it uses the simple consumer API of 0.8.x (which is still compatible). The major goal of this work is to take advantage of the new 0.9 client features, especially rebalancing and dynamic scaling of clients.
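For context, the rebalancing referred to here comes from the 0.9 "new consumer" API, where group membership and partition assignment are coordinated broker-side: a consumer only subscribes to a topic, and the group coordinator redistributes partitions as instances come and go. A minimal sketch with the plain 0.9 client follows (not the binder itself; the broker address, group id, and topic name are illustrative):

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RebalancingConsumerSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        props.put("group.id", "ticktock");                // members of a group share the partitions
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // subscribe() (rather than assign()) opts into broker-coordinated
            // rebalancing: starting another instance with the same group.id
            // redistributes partitions automatically.
            consumer.subscribe(Arrays.asList("ticktock.time")); // illustrative topic
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    System.out.printf("partition=%d offset=%d%n",
                            record.partition(), record.offset());
                }
            }
        }
    }
}
```

This is exactly what the 0.8 simple consumer could not do; there, each instance had to be told statically which partitions were its own.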

@sabbyanandan
Contributor Author

From @mbogoevici on May 24, 2016 10:51

To wit, the only blocker of sorts is spring-projects/spring-kafka#84, and we are confident that it will be resolved soon.

@sabbyanandan
Contributor Author

From @cdupuis on May 25, 2016 15:38

Thanks @mbogoevici.

@mbogoevici
Contributor

Closed via #13

@Kenvelo90

@sabbyanandan
Does it mean that we can dynamically add Kafka consumers in v1.1.0.RELEASE?

According to the latest Spring Cloud Data Flow documentation:
"Currently, this is not supported with the Kafka binder (based on the 0.8 simple consumer at the time of the release), as well as partitioned streams, for which the suggested workaround is redeploying the stream with an updated number of instances. Both cases require a static consumer set up based on information about the total instance count and current instance index, a limitation intended to be addressed in future releases. For example, Kafka 0.9 and higher provides good infrastructure for scaling applications dynamically and will be available as an alternative to the current Kafka 0.8 based binder in the near future. One specific concern regarding scaling partitioned streams is the handling of local state, which is typically reshuffled as the number of instances is changed. This is also intended to be addressed in the future versions, by providing first class support for local state management."

Reference:
http://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#arch-runtime-scaling

Great work, it's an awesome integration!
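For reference, the "static consumer set up" mentioned in the quoted docs corresponds to fixed instance-count/index properties that every deployed instance of a partitioned consumer must carry with the 0.8-based binder; a rough sketch (values illustrative):

```properties
# Illustrative static setup for a partitioned consumer (pre-0.9 binder):
# each instance must know the total group size and its own index, so
# scaling means redeploying with updated values.
spring.cloud.stream.instanceCount=3
spring.cloud.stream.instanceIndex=0
spring.cloud.stream.bindings.input.consumer.partitioned=true
```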

@sabbyanandan
Contributor Author

Hi, @Kenvelo90:

Does it mean that we can dynamically add Kafka consumers in v1.1.0.RELEASE?

If you're building standalone Spring Cloud Stream (SCSt) applications, yes, you can, and that is supported.

In Spring Cloud Data Flow, there's no tight coupling to SCSt, except of course that the applications built with SCSt are registered and used in stream definitions. That said, we are in the process of upgrading the OOTB applications to the latest SCSt 1.1.0.RELEASE, and when that is done we will update the docs. In the meantime, you can build and register custom SCSt applications with the latest Kafka binder to take advantage of the auto-balancing capabilities; a sketch of such an application follows.
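For illustration, a custom SCSt application of that era can be as small as the sketch below (assumes Spring Cloud Stream 1.1.x with spring-cloud-starter-stream-kafka on the classpath; the class name and logging are illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Minimal sink application; the Kafka binder is picked up from the classpath.
@SpringBootApplication
@EnableBinding(Sink.class)
public class LogSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(LogSinkApplication.class, args);
    }

    // Each payload arriving on the bound "input" channel lands here. With the
    // 0.9-based binder, running N copies of this app with the same
    // spring.cloud.stream.bindings.input.group lets Kafka rebalance partitions
    // across them dynamically -- no instanceIndex bookkeeping required.
    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        System.out.println("Received: " + payload);
    }
}
```

Scaling is then just a matter of starting more instances under the same consumer group and letting the 0.9 client rebalance.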

@Kenvelo90

@sabbyanandan Great, thank you!

sobychacko added a commit to sobychacko/spring-cloud-stream-binder-kafka that referenced this issue Mar 24, 2020
garyrussell pushed a commit that referenced this issue Mar 24, 2020
* Offset commit when DLQ is enabled and manual ack

Resolves #870

When an error occurs, if the application uses manual acknowledgment
(i.e. autoCommitOffset is false) and DLQ is enabled, then after
publishing to the DLQ the offset is currently not committed.
Address this issue by manually committing after publishing to the DLQ.

* Address PR review comments

* Addressing PR review comments - #2
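For context on the combination this commit describes, a consumer using manual acknowledgment with DLQ enabled might look roughly like the sketch below (the binder property names are real Kafka binder properties; the class name and handler logic are illustrative):

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;

// Assumed binding configuration (the binding name "input" comes from Sink):
//   spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false
//   spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
@EnableBinding(Sink.class)
public class ManualAckDlqSketch {

    @StreamListener(Sink.INPUT)
    public void handle(Message<String> message) {
        // With autoCommitOffset=false, the binder exposes the Kafka
        // acknowledgment as a message header instead of committing for you.
        Acknowledgment ack = message.getHeaders()
                .get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        process(message.getPayload());
        // The offset is committed only here. Before the fix above, a record
        // that failed in process() and was routed to the DLQ never reached
        // this line, so its offset was never committed.
        ack.acknowledge();
    }

    private void process(String payload) {
        // Domain logic; an exception here routes the record to the DLQ
        // (illustrative).
    }
}
```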
sobychacko added a commit that referenced this issue Mar 25, 2020