GKE Kafkacat Error: Name or service not known #99

Closed
cquan808 opened this issue Nov 16, 2017 · 5 comments

cquan808 commented Nov 16, 2017

I am currently trying this setup on GCE. The Kubernetes cluster node's internal IP address is the same as the GCE VM's, and everything is tested in the default namespace.

I created the topic internally and can produce and consume messages internally. But when I try to consume a message externally via kafkacat, I receive a "Name or service not known" error (an advertised.listeners issue, perhaps?):

on@kubernetes:~/kubernetes-kafka$ kafkacat -C -b [kubernetes-cluster-IP]:32400 -t kube
%3|1510858507.273|FAIL|rdkafka#consumer-0| kafka-0.broker.default.svc.cluster.local:9094/0: Failed to resolve 'kafka-0.broker.default.svc.cluster.local:9094': Name or service not known
%3|1510858507.273|ERROR|rdkafka#consumer-0| kafka-0.broker.default.svc.cluster.local:9094/0: Failed to resolve 'kafka-0.broker.default.svc.cluster.local:9094': Name or service not known

kafkacat is able to find the broker and list the topics:

on@kubernetes:~/kubernetes-kafka$ kafkacat -C -b [kubernetes-cluster-IP]:32400 -L
Metadata for all topics (from broker -1: [kubernetes-cluster-IP]:32400/bootstrap):
 1 brokers:
  broker 0 at kafka-0.broker.default.svc.cluster.local:9094
 2 topics:
  topic "kube" with 1 partitions:
    partition 0, leader 0, replicas: 0, isrs: 0
  topic "__consumer_offsets" with 50 partitions:

Using grep, I get:

on@kubernetes:~/kubernetes-kafka$ kubectl -n default logs kafka-0 | grep "Registered broker"
[2017-11-16 16:50:41,754] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka-0.broker.default.svc.cluster.local,9094,ListenerName(OUTSIDE),PLAINTEXT),EndPoint(kafka-0.broker.default.svc.cluster.local,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)

Below is my server.properties from an SSH session into the kafka-0 pod (the log basics section is excluded).
I noticed from the init.sh file that only broker.id was set, broker.rack failed (but I don't need it), and advertised.listeners was not changed at all. I manually set advertised.listeners to [kubernetes-cluster-IP]:32400.

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

#init#broker.rack=# zone lookup failed, see -c init-config logs

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
listeners=OUTSIDE://:9094,PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
advertised.listeners=OUTSIDE://[kubernetes-cluster-IP]:32400,PLAINTEXT://:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL,OUTSIDE:PLAINTEXT
inter.broker.listener.name=PLAINTEXT

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
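
For reference, the manual edit amounts to a substitution like this against the placeholder init.sh leaves behind (a sketch; [kubernetes-cluster-IP] stands in for the real node IP):

sed -i "s|#init#advertised.listeners=OUTSIDE://#init#|advertised.listeners=OUTSIDE://[kubernetes-cluster-IP]:32400|" /etc/kafka/server.properties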

Using the outside-0 service:

kind: Service
apiVersion: v1
metadata:
  name: outside-0
  namespace: default
spec:
  selector:
    app: kafka
    kafka-broker-id: "0"
  ports:
  - protocol: TCP
    targetPort: 9094
    port: 32400
    nodePort: 32400
  type: NodePort
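
As a quick sanity check that the NodePort itself answers (assuming nc is available; the IP is again a placeholder):

kubectl -n default get svc outside-0
nc -vz [kubernetes-cluster-IP] 32400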

Related to issue #13 @solsson
Any help would be much appreciated, thanks.


solsson commented Nov 16, 2017

You seem to get the internal DNS name at bootstrap, but the "outside" port: Failed to resolve 'kafka-0.broker.default.svc.cluster.local:9094'. It's weird because you say that config, after init.sh, contains advertised.listeners=OUTSIDE://[kubernetes-cluster-IP]:32400.

Can you find the string kafka-0.broker.default.svc.cluster.local:9094 anywhere in config, or in debug messages from kafkacat? Run kafkacat with -d broker to see more info about the bootstrap flow.
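
For example (placeholder address; on kafkacat builds where -d is rejected, the same librdkafka debug property can be passed with -X):

kafkacat -C -b [kubernetes-cluster-IP]:32400 -t kube -d broker
kafkacat -C -b [kubernetes-cluster-IP]:32400 -t kube -X debug=broker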


cquan808 commented Nov 16, 2017

@solsson

You seem to get the internal DNS name at bootstrap, but the "outside" port: Failed to resolve 'kafka-0.broker.default.svc.cluster.local:9094'. It's weird because you say that config, after init.sh, contains advertised.listeners=OUTSIDE://[kubernetes-cluster-IP]:32400.

After running 10broker-config.yml, which includes init.sh, it shows up as #init#advertised.listeners=OUTSIDE://#init#,PLAINTEXT://:9092. I just modified it afterwards to advertised.listeners=OUTSIDE://[kubernetes-cluster-IP]:32400 to see if it would work.

I also noticed that the line kubectl -n $POD_NAMESPACE label pod $POD_NAME kafka-broker-id=$KAFKA_BROKER_ID in init.sh never ran, so I labeled the kafka-0 pod manually.

Can you find the string kafka-0.broker.default.svc.cluster.local:9094 anywhere in config, or in debug messages from kafkacat? Run kafkacat with -d broker to see more info about the bootstrap flow.

By config, did you mean the server.properties file? kafka-0.broker.default.svc.cluster.local:9094 is not anywhere in that file, but it did show up in grep as an endpoint, as in the output above.

kafkacat -d broker gave me ERROR: Invalid value for configuration property "debug".
I'm not sure how else I can check the bootstrap flow, as I am relatively new to Kubernetes and Kafka.
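
(A way to search the broker's live config without an SSH session, as a sketch using kubectl exec against the /etc/kafka path this setup uses:)

kubectl -n default exec kafka-0 -- grep -r "kafka-0.broker" /etc/kafka/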


cquan808 commented Nov 17, 2017

Got kafkacat to produce and consume messages! kafkacat -d broker helped, thanks @solsson.

In the file 10broker-config.yml, I truncated init.sh to just:

  init.sh: |-
    #!/bin/bash
    set -x

    KAFKA_BROKER_ID=${HOSTNAME##*-}
    sed -i "s/#init#broker.id=#init#/broker.id=$KAFKA_BROKER_ID/" /etc/kafka/server.properties

and I used this line for advertised.listeners:
advertised.listeners=OUTSIDE://[kubernetes-cluster-IP]:32400,PLAINTEXT://:9092

Labeling the kafka-0 pod was done manually after deploying the StatefulSet:
kubectl -n $POD_NAMESPACE label pod $POD_NAME kafka-broker-id=$KAFKA_BROKER_ID

Although I hardcoded most of it for one Kafka pod and it is not automatically set up for multiple Kafka pods yet, this is a good start if you plan to use GCE; a sketch of one way to generalize follows.
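
To generalize beyond one pod, something like the following could work. This is a sketch only: it assumes one outside-N NodePort service per broker at 32400+N, and a NODE_IP env var injected via the downward API from status.hostIP; neither is set up by the stock init.sh.

  init.sh: |-
    #!/bin/bash
    set -x

    KAFKA_BROKER_ID=${HOSTNAME##*-}
    sed -i "s/#init#broker.id=#init#/broker.id=$KAFKA_BROKER_ID/" /etc/kafka/server.properties

    # Assumed convention: one outside-N NodePort service per broker at 32400 + N
    OUTSIDE_PORT=$((32400 + KAFKA_BROKER_ID))
    sed -i "s|#init#advertised.listeners=OUTSIDE://#init#|advertised.listeners=OUTSIDE://${NODE_IP}:${OUTSIDE_PORT}|" /etc/kafka/server.properties

    # The label init.sh was meant to apply, so the outside-N selector matches the pod
    kubectl -n $POD_NAMESPACE label pod $POD_NAME kafka-broker-id=$KAFKA_BROKER_ID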

@bchhabra2490

I am facing the same issue. I am using a Kubernetes StatefulSet to deploy Kafka. Here is my YAML file.

## Headless Service to create DNS
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kafka
spec:
  ports:
  - port: 9092
  # [podname].broker.kafka.svc.cluster.local
  clusterIP: None
  selector:
    app: opius-kafka
---
# Deploy Stateful Set
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opius-kafka
  namespace: kafka
  labels:
    app: opius-kafka
spec:
  selector:
    matchLabels:
      app: opius-kafka
  serviceName: broker ## Name of the headless service
  replicas: 3
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: opius-kafka
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: kafka
        image: hyperledger/fabric-kafka
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zk-svc:2181
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 10Gi
---
# Deploy Service
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  namespace: kafka
  labels:
    app: opius-kafka-svc
spec:
  type: LoadBalancer
  ports:
  - name: kafka-server
    port: 9092
    protocol: TCP
  selector:
    app: opius-kafka

When I check my logs, Kafka is working fine. Here are the config values:

        advertised.host.name = null
        advertised.listeners = null
        advertised.port = 9092
        alter.config.policy.class.name = null
        authorizer.class.name = 
        auto.create.topics.enable = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.id = -1
        broker.id.generation.enable = true
        broker.rack = null
        compression.type = producer
        connections.max.idle.ms = 600000
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        fetch.purgatory.purge.interval.requests = 1000
        group.initial.rebalance.delay.ms = 0
        group.max.session.timeout.ms = 300000
        group.min.session.timeout.ms = 6000
        host.name = 
        inter.broker.listener.name = null
        inter.broker.protocol.version = 1.0-IV0
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
        listeners = null
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /tmp/kafka-logs
        log.dirs = /tmp/kafka-logs
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.format.version = 1.0-IV0
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = -1
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides = 
        message.max.bytes = 1000012
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 1440
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 1
        offsets.topic.segment.bytes = 104857600
        port = 9092
        principal.builder.class = null
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.consumer.default = 9223372036854775807
        quota.producer.default = 9223372036854775807
        quota.window.num = 11
        quota.window.size.seconds = 1
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 10000
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.enabled.mechanisms = [GSSAPI]
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.mechanism.inter.broker.protocol = GSSAPI
        security.inter.broker.protocol = PLAINTEXT
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = null
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
        transaction.max.timeout.ms = 900000
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 1
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 1
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        zookeeper.connect = zk-svc:2181
        zookeeper.connection.timeout.ms = 6000
        zookeeper.session.timeout.ms = 6000
        zookeeper.set.acl = false
        zookeeper.sync.time.ms = 2000

When I run kafkacat -L -b <External-IP>:9092 from my machine, it works fine, but when I try kafkacat -C -b <External-IP>:9092 -t test, it gives me the error % ERROR: Local: Host resolution failure: opius-kafka-2.broker.kafka.svc.cluster.local:9092/1003: Failed to resolve 'opius-kafka-2.broker.kafka.svc.cluster.local:9092': nodename nor servname provided, or not known (after 5002ms in state CONNECT)

Maybe it is because advertised.host.name = null and advertised.listeners = null; I'm not sure, though. But how do I pass the advertised hostname and listeners through the YAML file?
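
If the image follows the common KAFKA_* env convention (an assumption; worth checking against what hyperledger/fabric-kafka actually honors), the advertised host can be derived from the pod name via the downward API:

        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # Assumption: the image maps this variable to advertised.host.name
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "$(MY_POD_NAME).broker.kafka.svc.cluster.local"

Note that this headless-service DNS name still won't resolve from outside the cluster, which is the same failure mode as above; external clients need a listener that advertises an externally resolvable address, as in the earlier comments.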


bjlvfei commented Apr 28, 2020

I'm also facing this kind of issue when publishing to a topic with the kafkacat command line from outside the Kafka cluster on k8s:

3 brokers:
broker 0 at dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafka:9092
broker 2 at dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafka:9092
broker 1 at dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafka:9092
6 topics:
topic "est" with 1 partitions:
partition 0, leader 2, replicas: 2, isrs: 2
topic "test" with 1 partitions:
partition 0, leader 0, replicas: 0, isrs: 0
topic "esbuat.sl.bluecloud.ibm.com-broker" with 1 partitions:
partition 0, leader 0, replicas: 0, isrs: 0
topic "__consumer_offsets" with 50 partitions:
partition 0, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 10, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 20, leader 1, replicas: 1,2,0, isrs: 0,2,1
partition 40, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 30, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 9, leader 2, replicas: 2,1,0, isrs: 0,2,1
partition 11, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 31, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 39, leader 2, replicas: 2,1,0, isrs: 0,2,1
partition 13, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 18, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 22, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 8, leader 1, replicas: 1,2,0, isrs: 0,2,1
partition 32, leader 1, replicas: 1,2,0, isrs: 0,2,1
partition 43, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 29, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 34, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 1, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 6, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 41, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 27, leader 2, replicas: 2,1,0, isrs: 0,2,1
partition 48, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 5, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 15, leader 2, replicas: 2,1,0, isrs: 0,2,1
partition 35, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 25, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 46, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 26, leader 1, replicas: 1,2,0, isrs: 0,2,1
partition 36, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 44, leader 1, replicas: 1,2,0, isrs: 0,2,1
partition 16, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 37, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 17, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 45, leader 2, replicas: 2,1,0, isrs: 0,2,1
partition 3, leader 2, replicas: 2,1,0, isrs: 0,2,1
partition 24, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 38, leader 1, replicas: 1,2,0, isrs: 0,2,1
partition 33, leader 2, replicas: 2,1,0, isrs: 0,2,1
partition 23, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 28, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 2, leader 1, replicas: 1,2,0, isrs: 0,2,1
partition 12, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 19, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 14, leader 1, replicas: 1,2,0, isrs: 0,2,1
partition 4, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 47, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 49, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 42, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 7, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 21, leader 2, replicas: 2,1,0, isrs: 0,2,1
topic "_confluent-metrics" with 12 partitions:
partition 0, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 5, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 10, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 2, leader 2, replicas: 2,1,0, isrs: 0,2,1
partition 8, leader 2, replicas: 2,1,0, isrs: 0,2,1
partition 9, leader 1, replicas: 1,2,0, isrs: 0,2,1
partition 11, leader 2, replicas: 2,0,1, isrs: 0,2,1
partition 4, leader 0, replicas: 0,1,2, isrs: 0,2,1
partition 1, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 6, leader 1, replicas: 1,0,2, isrs: 0,2,1
partition 7, leader 0, replicas: 0,2,1, isrs: 0,2,1
partition 3, leader 1, replicas: 1,2,0, isrs: 0,2,1
topic "__confluent.support.metrics" with 1 partitions:
partition 0, leader 0, replicas: 0, isrs: 0
%7|1588085137.457|DESTROY|rdkafka#producer-1| [thrd:app]: Terminating instance
%7|1588085137.457|DESTROY|rdkafka#producer-1| [thrd:main]: Destroy internal
%7|1588085137.457|DESTROY|rdkafka#producer-1| [thrd:main]: Removing all topics
%7|1588085137.457|TERMINATE|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafka:9092/0: Handle is terminating: failed 0 request(s) in retry+outbuf
%7|1588085137.457|TERMINATE|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafka:9092/2: Handle is terminating: failed 0 request(s) in retry+outbuf
%7|1588085137.457|TERMINATE|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafka:9092/1: Handle is terminating: failed 0 request(s) in retry+outbuf
%7|1588085137.457|TERM|rdkafka#producer-1| [thrd::0/internal]: :0/internal: Received TERMINATE op in state UP: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs
%7|1588085137.457|BROKERFAIL|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafka:9092/1: failed: err: Local: Broker handle destroyed: (errno: Operation now in progress)
%7|1588085137.457|BROKERFAIL|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafka:9092/0: failed: err: Local: Broker handle destroyed: (errno: Operation now in progress)
%7|1588085137.457|STATE|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafka:9092/1: Broker changed state CONNECT -> DOWN
%7|1588085137.457|TERM|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafka:9092/1: Received TERMINATE op in state DOWN: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs
%7|1588085137.457|BROKERFAIL|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafka:9092/2: failed: err: Local: Broker handle destroyed: (errno: Operation now in progress)
%7|1588085137.457|STATE|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafka:9092/0: Broker changed state CONNECT -> DOWN
%7|1588085137.457|TERM|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafka:9092/0: Received TERMINATE op in state DOWN: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs
%7|1588085137.457|STATE|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafka:9092/2: Broker changed state CONNECT -> DOWN
%7|1588085137.457|TERM|rdkafka#producer-1| [thrd:dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafk]: dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafka:9092/2: Received TERMINATE op in state DOWN: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs
%7|1588085137.557|TERMINATE|rdkafka#producer-1| [thrd:kafka.dsw-dia-dataflow-877236.us-south.containers.appdomain.clo]: kafka.dsw-dia-dataflow-877236.us-south.containers.appdomain.cloud:19092/bootstrap: Handle is terminating: failed 0 request(s) in retry+outbuf
%7|1588085137.557|BROKERFAIL|rdkafka#producer-1| [thrd:kafka.dsw-dia-dataflow-877236.us-south.containers.appdomain.clo]: kafka.dsw-dia-dataflow-877236.us-south.containers.appdomain.cloud:19092/bootstrap: failed: err: Local: Broker handle destroyed: (errno: Resource temporarily unavailable)
%7|1588085137.557|STATE|rdkafka#producer-1| [thrd:kafka.dsw-dia-dataflow-877236.us-south.containers.appdomain.clo]: kafka.dsw-dia-dataflow-877236.us-south.containers.appdomain.cloud:19092/bootstrap: Broker changed state UP -> DOWN
%7|1588085137.557|TERM|rdkafka#producer-1| [thrd:kafka.dsw-dia-dataflow-877236.us-south.containers.appdomain.clo]: kafka.dsw-dia-dataflow-877236.us-south.containers.appdomain.cloud:19092/bootstrap: Received TERMINATE op in state DOWN: 1 refcnts, 0 toppar(s), 0 active toppar(s), 0 outbufs, 0 waitresps, 0 retrybufs
%7|1588085137.557|TERMINATE|rdkafka#producer-1| [thrd::0/internal]: :0/internal: Handle is terminating: failed 0 request(s) in retry+outbuf
%7|1588085137.557|BROKERFAIL|rdkafka#producer-1| [thrd::0/internal]: :0/internal: failed: err: Local: Broker handle destroyed: (errno: Success)
%7|1588085137.557|STATE|rdkafka#producer-1| [thrd::0/internal]: :0/internal: Broker changed state UP -> DOWN

root@esbuat:/root/ # kafkacat -b kafka.dsw-dia-dataflow-877236.us-south.containers.appdomain.cloud:19092 -t test -C
% ERROR: Local: Broker transport failure: dswgraylog-cp-kafka-0.dswgraylog-cp-kafka-headless.graylog-kafka:9092/0: Connect to ipv4#169.63.7.138:9092 failed: Connection timed out
% ERROR: Local: Broker transport failure: dswgraylog-cp-kafka-2.dswgraylog-cp-kafka-headless.graylog-kafka:9092/2: Connect to ipv4#169.60.39.246:9092 failed: Connection timed out
% ERROR: Local: Broker transport failure: dswgraylog-cp-kafka-1.dswgraylog-cp-kafka-headless.graylog-kafka:9092/1: Connect to ipv4#169.62.134.10:9092 failed: Connection timed out

Any ideas?

Thanks!
Levi
