Consumer connection closed / client reconnect #979
Take a look at the Kafka broker logs; they may provide some clues as to why it's disconnecting.
I have a very similar issue. I know there is Kafka available at
Looks like your Kafka broker isn't set up correctly. kafka-node will connect to the broker, pull the metadata for the other brokers, and connect to them as necessary. I believe the broker metadata coming from the broker is referring to 127.0.0.1 in this case.
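The bootstrap-then-metadata flow described above can be sketched in plain JavaScript (this is an illustration of the behaviour, not kafka-node's internals; all names here are hypothetical):

```javascript
// Illustrative sketch: the client bootstraps from one seed broker, then
// dials whatever hosts the broker's metadata *advertises* -- if that
// metadata says 127.0.0.1, the client connects to 127.0.0.1 regardless
// of which address you originally configured.
function brokersToDial(metadata) {
  // "metadata.brokers" stands in for the advertised-listener list
  // returned by the seed broker.
  return metadata.brokers.map(b => `${b.host}:${b.port}`);
}

// Example: broker reachable at kafka.example.com but advertising 127.0.0.1
const metadata = { brokers: [{ host: '127.0.0.1', port: 9092 }] };
console.log(brokersToDial(metadata)); // -> [ '127.0.0.1:9092' ]
```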
I had an issue where the log reads as follows. My Kafka instance is part of the devicehive stack, which runs in a Docker container.
A temporary workaround was to add the Docker container id 'e1a965baf098' to the /etc/hosts file. This lets me proceed, but every redeployment of the container requires updating the hosts file.
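The workaround above amounts to an /etc/hosts entry along these lines (the loopback address is an assumption; it should point at wherever the container's Kafka port is actually reachable):

```
# temporary workaround: map the advertised container id to the Docker host
127.0.0.1   e1a965baf098
```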
@chan71 you should look into updating the advertised listeners to reference
@hyperlink thanks for the suggestion. That helped get localhost added to the listeners. However, the following error is displayed when the consumer is invoked.
And this error when the producer is invoked.
Sorry @hyperlink, I must have misunderstood your suggestion. I added the following line to the docker-compose file.
While adding localhost to KAFKA_ADVERTISED_LISTENERS fixed the continuous attempts to retrieve metadata and the closing connection, it threw LeaderNotAvailable after 10+ attempts to refresh metadata. It looks like other references to the hostname may have caused this error. So I also added localhost to the KAFKA_ADVERTISED_HOST_NAME environment variable, and that fixed the issue.
Thanks for your help.
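For reference, the combination that worked can be sketched as a docker-compose environment block (the variable names follow the ones mentioned above; the listener string, protocol, and port are assumptions to adjust for your stack):

```yaml
environment:
  KAFKA_ADVERTISED_HOST_NAME: localhost
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
```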
I'm sorry if this is the wrong place to post this. I'm having an issue when running my tests in CI (CentOS). They work fine locally on my Mac. Here's the output I'm getting:
kafka-node:KafkaClient Connect attempt 1 +0ms
kafka-node:KafkaClient Trying to connect to host: localhost port: 29092 +14ms
kafka-node:KafkaClient Sending versions request to localhost:29092 +139ms
kafka-node:KafkaClient broker socket connected {"host":"localhost","port":"29092"} +15ms
kafka-node:KafkaClient Received versions response from localhost:29092 +81ms
kafka-node:KafkaClient setting api support to {"produce":{"min":0,"max":2,"usable":2},"fetch":{"min":0,"max":3,"usable":2},"offset":{"min":0,"max":1,"usable":0},"metadata":{"min":0,"max":2,"usable":0},"leader":{"min":0,"max":0,"usable":false},"stopReplica":{"min":0,"max":0,"usable":false},"updateMetadata":{"min":0,"max":3,"usable":false},"controlledShutdown":{"min":1,"max":1,"usable":false},"offsetCommit":{"min":0,"max":2,"usable":2},"offsetFetch":{"min":0,"max":2,"usable":1},"groupCoordinator":{"min":0,"max":0,"usable":0},"joinGroup":{"min":0,"max":1,"usable":0},"heartbeat":{"min":0,"max":0,"usable":0},"leaveGroup":{"min":0,"max":0,"usable":0},"syncGroup":{"min":0,"max":0,"usable":0},"describeGroups":{"min":0,"max":0,"usable":0},"listGroups":{"min":0,"max":0,"usable":0},"saslHandshake":{"min":0,"max":0,"usable":false},"apiVersions":{"min":0,"max":0,"usable":0},"createTopics":{"min":0,"max":1,"usable":false},"deleteTopics":{"min":0,"max":0,"usable":false}} +1ms
kafka-node:KafkaClient updating metadatas +138ms
READY! undefined
kafka-node:KafkaClient updating metadatas +3s
kafka-node:KafkaClient checking payload topic/partitions has leaders +2ms
kafka-node:KafkaClient found leaders for all +0ms
kafka-node:KafkaClient grouped requests by 1 brokers ["1"] +1ms
kafka-node:KafkaClient missing apiSupport waiting until broker is ready... +62ms
kafka-node:Consumer connection closed +44ms
kafka-node:KafkaClient kafka-node-client reconnecting to kafka:29092 +1s
kafka-node:Consumer connection closed +68ms
kafka-node:KafkaClient kafka-node-client reconnecting to kafka:29092 +1s
kafka-node:Consumer connection closed +55ms
...that KafkaClient reconnect followed by Consumer connection closed repeats over and over.
Any ideas on how I can troubleshoot this?
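One thing the log above suggests: the client bootstraps via localhost:29092 but keeps reconnecting to kafka:29092, so the broker appears to advertise the hostname kafka, which the CI host may not resolve. If that is the case, a hosts entry on the CI machine is one possible workaround (this assumes the broker really is reachable on the loopback interface; the alternative is changing the broker's advertised listeners as discussed above):

```
# /etc/hosts on the CI host: make the advertised broker hostname resolvable
127.0.0.1   kafka
```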