
Producer and consumer optimizations #73

Merged
merged 1 commit into master on Apr 12, 2019

Conversation

@fbeltrao (Contributor) commented on Apr 12, 2019

The following changes have been made:

  • Default value type changed from string to byte[], and the byte[] → string converter has been updated. The goal is to offer the best performance by default and let the consumer deserialize at the function level (Avro and Protobuf deserialization are still supported)
  • Trigger and output attributes now share the same constructor signature (brokerList, topic)
  • Using a single librdkafka producer per configuration. Serialization for IAsyncCollector is handled via DependentProducerBuilder
  • Using async produce in the output binding; the previous use of Flush blocked the thread
  • Cleaned up the KafkaListener, passing deserializers as parameters (removed KafkaListenerAvro and KafkaListenerProtobuf)
  • Added an asynchronous commit strategy, saving offsets in the background every 200 ms (after a few tests, the interval that most closely matched the accuracy of the previous implementation)
  • Added tests for KafkaProducerFactory and KafkaListenerFactory
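The single-producer-per-configuration idea can be sketched as a factory that caches client instances by connection configuration, so every binding that uses the same brokers shares one underlying librdkafka handle (in the actual C# extension this sharing is what DependentProducerBuilder enables). The sketch below is an illustrative Python analogue, not the extension's code; the class and parameter names are hypothetical.

```python
from threading import Lock


class KafkaProducerFactory:
    """Hypothetical sketch: bindings that share a broker configuration
    reuse one producer instead of each creating their own client."""

    def __init__(self, create_producer):
        # create_producer is a callable (broker_list, config) -> producer;
        # in a real setup it would build a Confluent/librdkafka producer.
        self._create_producer = create_producer
        self._producers = {}
        self._lock = Lock()

    def get_producer(self, broker_list, extra_config=None):
        # Key the cache by the full connection configuration so that
        # identical bindings share one client, while bindings pointing
        # at different brokers get separate ones.
        key = (broker_list, tuple(sorted((extra_config or {}).items())))
        with self._lock:
            if key not in self._producers:
                self._producers[key] = self._create_producer(
                    broker_list, extra_config or {}
                )
            return self._producers[key]
```

Beyond saving connections, sharing one producer lets librdkafka batch messages from all bindings together, which is where most of the throughput gain comes from.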

Fixes: #72, #71, #70
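The asynchronous commit strategy described above can be sketched as follows: the processing loop only records the latest processed offset (cheap, no I/O), and a background worker persists it on a fixed interval, with a final synchronous flush on shutdown. This is an illustrative Python sketch under assumed names, not the extension's C# implementation.

```python
import threading


class AsyncCommitStrategy:
    """Hypothetical sketch: commit the most recently marked offset
    in the background on a fixed interval (e.g. every 200 ms)."""

    def __init__(self, commit, interval=0.2):
        self._commit = commit            # callback that persists an offset
        self._interval = interval
        self._pending = None
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def mark(self, offset):
        # Called from the consume loop after processing a record;
        # never blocks on I/O, just overwrites the pending offset.
        with self._lock:
            self._pending = offset

    def _run(self):
        # wait() returns True once stop is set, ending the loop.
        while not self._stop.wait(self._interval):
            self._flush()

    def _flush(self):
        with self._lock:
            offset, self._pending = self._pending, None
        if offset is not None:
            self._commit(offset)

    def close(self):
        # Final synchronous flush so no processed offset is lost.
        self._stop.set()
        self._worker.join()
        self._flush()
```

The trade-off versus committing on every record is at most one interval's worth of reprocessed records after a crash, in exchange for removing commit latency from the hot path.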

@ryancrawcour ryancrawcour merged commit c630ce3 into master Apr 12, 2019
@fbeltrao fbeltrao deleted the fbeltrao/producer-consumer-optimization branch April 14, 2019 15:43
Development

Successfully merging this pull request may close these issues.

Verify checkpoint saving strategy