
[Question] kafka: error decoding packet: message of length 1213486160 too large or too small #2289

Closed
leonj1 opened this issue Dec 8, 2019 · 3 comments

leonj1 commented Dec 8, 2019

I'm trying to create a cluster with listeners of type "plain", no TLS or auth. Best I can tell from the (master) docs, we just need to define the type "plain". This cluster should also be available from outside the K8s cluster.

I have successfully defined a K8s Service and Ingress and can telnet to the external port. When connecting to the Kafka cluster via a Kafka client, there's an error: kafka: error decoding packet: message of length 1213486160 too large or too small

I can get a Strimzi Kafka operator 0.11.3 cluster working successfully, but it's not externally available. This attempt is using v0.14.0. Any guidance would be appreciated. Thanks!

# Ansible templated example
---
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: "{{ kafka_cluster_name }}-{{ env }}"
  namespace: kafka
spec:
  kafka:
    version: 2.3.0
    replicas: 5
    listeners:
      plain: {}
    config:
      offsets.topic.replication.factor: 5
      transaction.state.log.replication.factor: 5
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.1"
      config.Consumer.Fetch.Max = 40000000
    storage:
      type: persistent-claim
      size: 100Gi
    listeners:
      external:
        type: ingress
        configuration:
          bootstrap:
            host: "bootstrap.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          brokers:
          - broker: 0
            host: "broker0.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          - broker: 1
            host: "broker1.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          - broker: 2
            host: "broker2.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          - broker: 3
            host: "broker3.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          - broker: 4
            host: "broker4.{{ kafka_cluster_short_name }}.{{ root_dns }}"
  zookeeper:
    replicas: 5
    storage:
      type: persistent-claim
      size: 100Gi
  entityOperator:
    topicOperator: {}
    userOperator: {}
@leonj1 leonj1 added the question label Dec 8, 2019

scholzj commented Dec 8, 2019

First of all, you can have only one listeners field. So you have to move the plain and external listener configurations into one place, e.g. like this:

  kafka:
    version: 2.3.0
    replicas: 5
    listeners:
      plain: {}
      external:
        type: ingress
        configuration:
          bootstrap:
            host: "bootstrap.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          brokers:
          - broker: 0
            host: "broker0.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          - broker: 1
            host: "broker1.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          - broker: 2
            host: "broker2.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          - broker: 3
            host: "broker3.{{ kafka_cluster_short_name }}.{{ root_dns }}"
          - broker: 4
            host: "broker4.{{ kafka_cluster_short_name }}.{{ root_dns }}"
    config:
      offsets.topic.replication.factor: 5
      transaction.state.log.replication.factor: 5
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.1"
      config.Consumer.Fetch.Max: 40000000
    storage:
      type: persistent-claim
      size: 100Gi

Also, the plain listener is without encryption, but it is available only from inside the Kubernetes cluster. You can connect to it using <cluster-name>-kafka-bootstrap:9092.
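In code, that internal address is just the cluster name with Strimzi's -kafka-bootstrap suffix; a tiny sketch (my-cluster is a hypothetical cluster name, not one from this issue):

```python
# Strimzi exposes the plain internal listener via a Service named
# <cluster-name>-kafka-bootstrap, on port 9092.
cluster_name = "my-cluster"  # hypothetical; substitute your Kafka CR's metadata.name
bootstrap_server = f"{cluster_name}-kafka-bootstrap:9092"
print(bootstrap_server)  # my-cluster-kafka-bootstrap:9092
```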

The external listener is accessible from outside. However, when using Ingress, we use SSL passthrough to get the non-HTTP data through the Ingress, so that listener will always have to use TLS. If you want external applications to connect without TLS, you could use, for example, load balancers or node ports. There you can disable TLS by adding tls: false.
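For example, a minimal sketch of such a listener (assuming the same kafka.strimzi.io/v1alpha1 schema as above; nodeport chosen arbitrarily):

```yaml
spec:
  kafka:
    listeners:
      plain: {}
      external:
        type: nodeport
        tls: false   # allowed for nodeport/loadbalancer, not for ingress
```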

scholzj commented Apr 5, 2020

I will close this since there has been no update for more than 10 days. If you have anything more, feel free to open a new issue, reopen this one, or get in touch with us on Slack or the mailing list.

@scholzj scholzj closed this as completed Apr 5, 2020
lovasoa commented Oct 15, 2020

This issue is the top Google result for me for 1213486160. For other people who end up here:

  • 1213486160 is the decimal representation of the bytes 0x48 0x54 0x54 0x50, which is "HTTP" in ASCII
  • What probably happened is this: a client expecting a binary payload (such as Kafka's protocol) received an HTTP response instead. It may either be talking to the wrong host and port, or there is an HTTP proxy (such as a Kubernetes ingress) where there shouldn't be one.
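To see where that number comes from: Kafka frames every response with a 4-byte big-endian length prefix, so a client that reads the first four bytes of an HTTP response interprets the ASCII "HTTP" as a message length. A quick check in plain Python, no Kafka needed:

```python
import struct

# Kafka's wire protocol prefixes each message with a big-endian int32 length.
# If an HTTP server answers instead, the first 4 bytes are the ASCII "HTTP".
length = 1213486160
print(struct.pack(">i", length))       # b'HTTP'
print(int.from_bytes(b"HTTP", "big"))  # 1213486160
```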
