./rdkafka_example -P -t test1 -b localhost:9092 -X topic.metadata.refresh.interval.ms="-1" does not work #1149
Comments
Thanks for reporting this. There was a bug that caused the metadata cache expiry to be based on the metadata refresh interval (×3) even when the refresh was disabled.
Can you update and verify the fix, please?
Thanks a lot @edenhill. Beyond your fix, I'd like to disable all MetadataRequests entirely, not just delay them to every 15 minutes. How can I do that? I'd also like topics not to be created automatically; how can I do that? librdkafka may fail if one of 3 brokers is down, even though all topics are configured with a replication factor of 3, so I think we may not need the refresh at all. I only use librdkafka to send messages; topic creation can be handled manually. Thanks,
If you don't want topics auto-created, you should disable auto.create.topics.enable on the broker; that is your safest bet. Even with replicas, librdkafka needs to perform a Metadata request to find out who the new leader is when the current leader goes down.
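A minimal broker-side fragment along the lines the maintainer suggests. auto.create.topics.enable is a standard Apache Kafka broker property set in server.properties; the comment describes its effect:

```properties
# server.properties (Kafka broker configuration)
# Prevent topics from being created implicitly when clients request
# metadata for an unknown topic; topics must then be created manually.
auto.create.topics.enable=false
```

With this set, a producer writing to a nonexistent topic gets an UNKNOWN_TOPIC_OR_PART error instead of silently creating the topic.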
…ms is disabled (closes confluentinc#1149)
Description
I want to disable the metadata refresh. According to CONFIGURATION.md, setting topic.metadata.refresh.interval.ms can disable it, but when I set it, a metadata request is still sent out every second:
./rdkafka_example -P -t test1 -b localhost:9092 -X topic.metadata.refresh.interval.ms="-1"
However, ./rdkafka_example -P -t test1 -b localhost:9092 -X topic.metadata.refresh.interval.ms=10000 does work correctly.
How to reproduce
The command above reproduces the issue every time.
Checklist
Please provide the following information:

- librdkafka version: 0.9.4.x
- Apache Kafka version: kafka_2.11-0.10.1.1
- librdkafka client configuration: default
- Operating system: 4.8.11-1.el7.elrepo.x86_64
- Logs (with debug=.. as necessary) from librdkafka