
Error 500 Timeout #75

Closed
danielfnfaria opened this issue Mar 21, 2017 · 21 comments

Comments

@danielfnfaria

POST or GET requests to /connectors return a 500 timeout in distributed mode.

[2017-03-21 21:26:04,794] INFO 127.0.0.1 - - [21/Mar/2017:21:24:34 +0000] "GET /connectors HTTP/1.1" 500 48 90235 (org.apache.kafka.connect.runtime.rest.RestServer:60)
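For reference, the trailing fields of the RestServer access-log line above are the status code, response bytes, and latency in milliseconds; a small parsing sketch (the field layout is an assumption read off the line shown, not a documented format):

```python
import re

def parse_rest_log(line):
    """Extract method, path, status, bytes, and latency (ms) from a
    RestServer NCSA-style access-log line."""
    m = re.search(r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
                  r'(?P<status>\d{3}) (?P<nbytes>\d+) (?P<latency_ms>\d+)',
                  line)
    if not m:
        return None
    return {k: m.group(k)
            for k in ('method', 'path', 'status', 'nbytes', 'latency_ms')}

line = ('127.0.0.1 - - [21/Mar/2017:21:24:34 +0000] '
        '"GET /connectors HTTP/1.1" 500 48 90235')
info = parse_rest_log(line)
print(info['status'], info['latency_ms'])  # → 500 90235
```

The 90235 ms latency is just over 90 seconds, which matches the worker's internal request timeout reported throughout this thread.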

@ewencp
Contributor

ewencp commented Apr 5, 2017

@danielfnfaria Is there any more information than this? What is the actual output? It looks like it wrote 48 bytes.

Can you check the connect logs to see if there are any relevant messages?

@ldcasillas-progreso

I'm not experiencing @danielfnfaria's precise issues, but something I've noticed is that some erroneous requests to Kafka Connect distributed workers cause their internal REST endpoints to die, with no error returned to the caller or even logged at all.

The first example that I experienced was because I assumed that the Confluent Platform's uber-RPM included the S3 connector, when it turns out that it doesn't. Before I realized this, any attempt I made at registering an S3 sink timed out with a 500 error; and not just that, after such a timeout, all requests to the worker's REST interface would time out thereafter until the worker was restarted. Once I realized that the S3 connector jars were not actually there and installed that RPM separately, the registration request succeeded.

So basically, whatever problem @danielfnfaria is experiencing here, the bigger problem is that distributed workers swallow exceptions and die when you send them a "killer request."

@sailxjx

sailxjx commented May 23, 2017

@ewencp Same problem here: no exception, just a 500 timeout when updating or deleting a connector. And once this happens, no PUT/POST requests work at all.
[2017-05-23 11:14:12,700] INFO 10.10.4.1 - - [23/May/2017:03:12:05 +0000] "DELETE /connectors/mongo_cron_source_slave HTTP/1.1" 500 92 127182 (org.apache.kafka.connect.runtime.rest.RestServer)

@yogeshsangvikar

I am getting the same error for the GET and POST /connectors APIs. I am using the confluent-3.3.0 package.

2017-08-08 10:42:02 INFO RestServer:60 - 10.160.240.125 - - [08/Aug/2017:10:40:32 +0000] "GET /connectors HTTP/1.1" 500 48 90007
2017-08-08 10:42:34 INFO RestServer:60 - 10.160.240.125 - - [08/Aug/2017:10:41:03 +0000] "POST /connectors HTTP/1.1" 500 48 90124

Please help to resolve this error.

@yogeshsangvikar

By downgrading Confluent to version 3.2.0, I am able to access the /connectors API.

@hleb-albau

Same problem

@hakamairi

Same problem, any updates?

@sailxjx

sailxjx commented Sep 19, 2017

I solved this problem by setting rest.advertised.host.name (to an IP address) and rest.advertised.port. Each Connect worker process needs a unique host/port combination, and these hosts and ports must be reachable from every node of the cluster.
If you start a cluster where some nodes share the same host name and port, connectors will block after receiving an update/delete request.
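A sketch of what that looks like in each worker's connect-distributed.properties (the IP and port values here are placeholders; the two property names are the ones referred to above):

```properties
# connect-distributed.properties, per worker (example values)
rest.advertised.host.name=192.168.1.10   # this worker's IP, reachable from every other worker
rest.advertised.port=8083                # must be unique if workers share a host
```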

@vultron81

I too am having this issue on v3.3.0 of kafka-connect. The /connectors endpoint appears to be broken in this version.

@rhauch
Member

rhauch commented Sep 29, 2017

#116 recently enhanced the connector to use exponential backoff. That was merged into the 3.3.x, 3.4.x, and master branches but has not yet been released. Feel free to build it to see if this fixes the issue -- would love to hear feedback.

@dex80526

dex80526 commented Dec 1, 2017

Any update on this issue? We are running into the same issue (GET /connectors times out).

@neeraj2k6

I am getting the same problem. Were you able to solve it? It seems some small config is missing :-(

@affair

affair commented May 2, 2018

Getting the same issue.
I've noticed one very interesting thing: the following Docker Compose file works like a charm.

---
version: '2'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: 0.0.0.0:22888:23888;192.168.2.60:32888:33888;192.168.2.60:42888:43888
    volumes:
       - zoo1:/data
    networks:
      - esnet2
    ports:
       - "22181:22181"
       - "22888:22888"
       - "23888:23888"

  zookeeper-2:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: 192.168.2.60:22888:23888;0.0.0.0:32888:33888;192.168.2.60:42888:43888
    volumes:
       - zoo2:/data
    networks:
      - esnet2
    ports:
       - "32181:32181"
       - "32888:32888"
       - "33888:33888"

  zookeeper-3:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 42181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: 192.168.2.60:22888:23888;192.168.2.60:32888:33888;0.0.0.0:42888:43888
    volumes:
       - zoo3:/data
    networks:
      - esnet2
    ports:
       - "42181:42181"
       - "42888:42888"
       - "43888:43888"

  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181,192.168.2.60:32181,192.168.2.60:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:19092
    volumes:
       - kafka1:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
       - "19092:19092"

  kafka-2:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181,192.168.2.60:32181,192.168.2.60:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:29092
    volumes:
       - kafka2:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
       - "29092:29092"

  kafka-3:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181,192.168.2.60:32181,192.168.2.60:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:39092
    volumes:
       - kafka3:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
       - "39092:39092"

  connect-1:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 18083:18083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092,192.168.2.60:29092,192.168.2.60:39092
      CONNECT_REST_PORT: 18083
      CONNECT_GROUP_ID: "connect"

      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status

      CONNECT_REPLICATION_FACTOR: 3

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3

      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"

      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR

      CONNECT_PLUGIN_PATH: /usr/share/java

  connect-2:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 28083:28083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092,192.168.2.60:29092,192.168.2.60:39092
      CONNECT_REST_PORT: 28083
      CONNECT_GROUP_ID: "connect"

      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status

      CONNECT_REPLICATION_FACTOR: 3

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3

      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"

      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR

      CONNECT_PLUGIN_PATH: /usr/share/java

  connect-3:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 38083:38083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092,192.168.2.60:29092,192.168.2.60:39092
      CONNECT_REST_PORT: 38083
      CONNECT_GROUP_ID: "connect"

      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status

      CONNECT_REPLICATION_FACTOR: 3

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3

      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"

      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR

      CONNECT_PLUGIN_PATH: /usr/share/java

volumes:
  zoo1:
    driver: local
  zoo2:
    driver: local
  zoo3:
    driver: local
  kafka1:
    driver: local
  kafka2:
    driver: local
  kafka3:
    driver: local

networks:
  esnet2:
    driver: bridge

But when I start this one

---
version: '2'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    #depends_on:
    #  - kibana
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      #ZOOKEEPER_INIT_LIMIT: 5
      #ZOOKEEPER_SYNC_LIMIT: 2
      #ZOOKEEPER_SERVERS: 0.0.0.0:22888:23888;192.168.2.60:32888:33888;192.168.2.60:42888:43888
    volumes:
       - zoo1:/data
    networks:
      - esnet2
    ports:
       - "22181:22181"
       - "22888:22888"
       - "23888:23888"

  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      #- zookeeper-2
      #- zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_NUM_PARTITIONS: 1
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 8760 # 1 year
      KAFKA_ZOOKEEPER_CONNECT: 192.168.2.60:22181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.2.60:19092
    volumes:
       - kafka1:/var/lib/kafka/data
    networks:
      - esnet2
    ports:
       - "19092:19092"

  connect-1:
    image: confluentinc/cp-kafka-connect:latest
    ports:
      - 18083:18083
    depends_on:
      - kafka-1
      #- kafka-2
      #- kafka-3
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 192.168.2.60:19092
      CONNECT_REST_PORT: 18083
      CONNECT_GROUP_ID: "connect"

      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status

      CONNECT_REPLICATION_FACTOR: 1

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1

      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_INTERNAL_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_INTERNAL_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"

      #CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      #CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

      CONNECT_REST_ADVERTISED_HOST_NAME: 192.168.2.60
      CONNECT_REST_ADVERTISED_PORT: 18083

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=INFO

      CONNECT_PLUGIN_PATH: /usr/share/java

volumes:
  esdata1:
    driver: local
  zoo1:
    driver: local
  kafka1:
    driver: local

networks:
  esnet2:
    driver: bridge

I can't even request the list of connectors (GET /connectors); I'm getting a 500 request timeout.
I don't know why it works for the cluster-mode Connect setup but not this one.

@affair

affair commented May 3, 2018

That was my stupid error.
https://docs.confluent.io/current/installation/docker/docs/configuration.html#confluent-kafka-cp-kafka

By default the replication factor is 3. I fixed my problem by setting KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 in my docker-compose.yml.
So keep in mind: if Kafka has not started successfully, Kafka Connect will respond with a 500 error code, because it stores its data in Kafka topics.
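In compose terms, a minimal sketch of the single-broker overrides (service and variable names follow the compose files above; the Connect storage-topic factors are shown alongside for completeness):

```yaml
# single-broker sketch: internal topics must fit on the one broker
kafka-1:
  environment:
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1   # defaults to 3, which one broker cannot satisfy
connect-1:
  environment:
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
```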

@gaisanshi

Looks like this issue has been happening for a while in some situations. I am using Confluent 4.0.1 in distributed mode and can reproduce it. In my case, I have one JdbcSourceConnector and one RedshiftSinkConnector. The first deploy or delete REST call works for either connector, but all following REST calls hang. I went through this thread http://mail-archives.apache.org/mod_mbox/kafka-users/201612.mbox/%3CC5AB03B2-8CB5-4258-82B3-1E105D52F567@trulia.com%3E and also confluentinc/kafka-connect-jdbc#302, but neither helps my situation. Does anyone have a suggestion?

@gaisanshi

I got my issue solved. In my case, the problem was that we were using "timestamp+incrementing" mode, but the source was a huge table with no index on the timestamp column, so after the source connector was created, it started querying the DB and waited for the result until timeout, then ran the query again and again. During the query, the REST API reported "500: timeout" for any new connector deployment (I don't know how the connector handles that logic internally). When I switched to another table that had an index built on it, it worked. I'm not sure whether there is a connector monitor that can detect this corner case, but a query timeout definitely should not bring down the REST API.
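For context, a sketch of the kind of JDBC source configuration involved (the connection URL, table-related column names, and topic prefix here are hypothetical; "timestamp+incrementing" mode and the two column settings are what trigger the slow query on an unindexed table):

```json
{
  "name": "jdbc-source-example",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db:5432/example",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "updated_at",
    "incrementing.column.name": "id",
    "topic.prefix": "example-"
  }
}
```

Without an index covering the timestamp column, each poll in this mode can become a full table scan on a large table, which is what kept the worker busy past the REST timeout.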

@rupeshpatel02

I am also facing a timeout error while posting the source connector for DB2. The POST API waits for almost 90 seconds and then times out with the error below:

[2019-06-02 00:17:17,906] INFO 192.168.1.2 - - [01/Jun/2019:18:45:47 +0000] "POST /connectors HTTP/1.1" 500 48 90004 (org.apache.kafka.connect.runtime.rest.RestServer:60)

I can also see the warning below in the Kafka Connect log just before the timeout error:

This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:1011)

Is there any configuration to increase the API timeout? I have also noticed interesting behavior: when I run Kafka Connect in standalone mode it works perfectly, and I can see the DB2 table data in the Kafka topic.
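If the poll-timeout warning is the trigger, Connect workers pass consumer.-prefixed settings through to their internal consumers; a sketch of raising that limit (the value is only an example, and note this addresses the rebalance warning, not the ~90 s REST request timeout itself):

```properties
# connect-distributed.properties (example value)
consumer.max.poll.interval.ms=600000   # default is 300000; raise if polls legitimately take longer
```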

@Bharath1796

Bharath1796 commented Aug 5, 2019

> I am also facing time out error while posting the source connector for DB2,POST API wait for almost 90 second and after that it time out with below error
>
> [2019-06-02 00:17:17,906] INFO 192.168.1.2 - - [01/Jun/2019:18:45:47 +0000] "POST /connectors HTTP/1.1" 500 48 90004 (org.apache.kafka.connect.runtime.rest.RestServer:60)
>
> I can see below warning also in Kafka connect log just before the time out error
>
> This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:1011)
>
> is there any configuration to increase the API time out. I have also noticed interesting behavior, when i run the kafka connect in standalone mode it works perfectly. I can see the DB2 table data in kafka topic.

Hi,

Do you have any solution for this? I'm also facing the exact same issue when loading a source connector in distributed mode. Please reply if anybody has a solution.

@M3lkior

M3lkior commented Jan 22, 2020

Facing the same issue here with kafka-connect-sftp source connector :(

@levzem
Contributor

levzem commented Feb 7, 2020

Closing this as the original issue has been resolved. Follow up commentary pertains to other connectors.

@levzem levzem closed this as completed Feb 7, 2020
@hongbo-miao

I got a similar issue. I posted my solution at https://stackoverflow.com/questions/71520181/got-500-request-timed-out-for-kafka-connect-rest-api-post-put-delete

For me, simply restarting Kafka Connect made the issue go away:

kubectl rollout restart deployment my-kafka-connect --namespace=my-kafka

So far, the timeout issue hasn't shown up again.
