
[bitnami/kafka] Unexpected Kafka request of type METADATA during SASL handshake. #25129

Closed
zjf88865 opened this issue Apr 11, 2024 · 14 comments
Labels: kafka, solved, stale (15 days without activity), tech-issues (the user has a technical issue about an application)

Comments


zjf88865 commented Apr 11, 2024

Name and Version

bitnami/kafka 3.7.0

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. First, update the Bitnami Helm repository to the latest index:
    helm repo update
  2. Create a Kafka cluster with Helm using the following command:
helm install kafka bitnami/kafka --namespace pre-logging \
    --set kafkaVersion=3.7.0 \
    --set replicaCount=3 \
    --set kafka.persistence.enabled=false \
    --set kafka.kafkaConfigOverrides=transaction.state.log.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=transaction.state.log.min.isr=2 \
    --set kafka.kafkaConfigOverrides=default.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=num.io.threads=8 \
    --set kafka.kafkaConfigOverrides=num.network.threads=3 \
    --set kafka.kafkaConfigOverrides=log.message.format.version=3.7 \
    --set kafka.kafkaConfigOverrides=inter.broker.protocol.version=3.7 \
    --set kafka.kafkaConfigOverrides=offsets.topic.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=transaction.state.log.num.partitions=50 \
    --set-string labels.app.kubernetes.io/managed-by=Helm \
    --set-string labels.meta.helm.sh/release-name=kafka \
    --set-string labels.meta.helm.sh/release-namespace=pre-logging \
    --set broker.persistence.enabled=false \
    --set controller.persistence.enabled=false \
    --set broker.logPersistence.enabled=false \
    --set kraft.enabled=true \
    --set listeners.client.protocol=SASL_PLAINTEXT \
    --set listeners.controller.protocol=SASL_PLAINTEXT \
    --set listeners.interbroker.protocol=SASL_PLAINTEXT \
    --set listeners.external.protocol=SASL_PLAINTEXT \
    --set sasl.interbroker.user=inter_broker_user \
    --set sasl.interbroker.password=as123456
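Note that repeating `--set kafka.kafkaConfigOverrides=...` does not accumulate settings: each `--set` for the same key overwrites the previous one, so only the last override takes effect. A minimal sketch of an alternative, assuming the chart's `extraConfig` value accepts raw server properties (key names and chart layout are assumptions based on recent bitnami/kafka chart versions; verify against your chart's values.yaml):

```shell
# Sketch: collect all server.properties overrides in one values file instead of
# repeated --set flags that overwrite each other. Key names are assumptions.
cat > kafka-values.yaml <<'EOF'
replicaCount: 3
kraft:
  enabled: true
listeners:
  client:
    protocol: SASL_PLAINTEXT
  controller:
    protocol: SASL_PLAINTEXT
  interbroker:
    protocol: SASL_PLAINTEXT
  external:
    protocol: SASL_PLAINTEXT
sasl:
  interbroker:
    user: inter_broker_user
    password: as123456
controller:
  persistence:
    enabled: false
broker:
  persistence:
    enabled: false
extraConfig: |
  transaction.state.log.replication.factor=3
  transaction.state.log.min.isr=2
  default.replication.factor=3
  offsets.topic.replication.factor=3
EOF

helm install kafka bitnami/kafka --namespace pre-logging -f kafka-values.yaml
```

This is deployment configuration, not a definitive fix; compare the resulting `KafkaConfig values` log block against your intended overrides to confirm they were applied.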

3. View the running containers:

[root@master ~]# kubectl get pods -n pre-logging                      
NAME                                   READY   STATUS    RESTARTS       AGE
kafka-controller-0                     1/1     Running   0              57m
kafka-controller-1                     1/1     Running   0              57m
kafka-controller-2                     1/1     Running   0              57m
logstash-deployment-7b57c48d48-t8bsj   1/1     Running   1 (219d ago)   297d

4. View the container logs. The errors take a few minutes to appear after the Kafka container starts; extra log lines are omitted because there are too many of them.

[root@master ~]# kubectl logs -n pre-logging kafka-controller-0
........
[2024-04-11 08:19:13,760] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2024-04-11 08:19:13,762] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-04-11 08:19:13,765] INFO [TxnMarkerSenderThread-0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-04-11 08:19:13,765] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-04-11 08:19:13,766] INFO [BrokerMetadataPublisher id=0] Updating metadata.version to 19 at offset OffsetAndEpoch(offset=6, epoch=1). (kafka.server.metadata.BrokerMetadataPublisher)
[2024-04-11 08:19:13,770] INFO [BrokerMetadataPublisher id=0] Resending BrokerRegistration with existing incarnation-id to inform the controller about log directories in the broker following metadata update: previousMetadataVersion: 3.0-IV1 newMetadataVersion: 3.7-IV4 (kafka.server.metadata.BrokerMetadataPublisher)
[2024-04-11 08:19:13,855] INFO [QuorumController id=0] Replayed RegisterControllerRecord contaning ControllerRegistration(id=0, incarnationId=Q9FPUYRmQYmVHZRH5NAWQQ, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLLER', securityProtocol=SASL_PLAINTEXT, host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093)], supportedFeatures={metadata.version: 1-19}). (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:13,856] INFO [QuorumController id=0] Replayed RegisterControllerRecord contaning ControllerRegistration(id=2, incarnationId=S5U6CW2yT8uQOJKdflgLrA, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLLER', securityProtocol=SASL_PLAINTEXT, host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093)], supportedFeatures={metadata.version: 1-19}). (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:13,860] INFO [QuorumController id=0] Replayed RegisterControllerRecord contaning ControllerRegistration(id=1, incarnationId=Lmsv1gwLRcWbz62i7GcnNA, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLLER', securityProtocol=SASL_PLAINTEXT, host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093)], supportedFeatures={metadata.version: 1-19}). (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:13,861] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=K0MtJm_bSfCD7akkrR7H4A, brokerEpoch=14, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[Lo60YyaR5MP2JquhVxC5EA]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:13,872] INFO [ControllerRegistrationManager id=0 incarnation=Q9FPUYRmQYmVHZRH5NAWQQ] Our registration has been persisted to the metadata log. (kafka.server.ControllerRegistrationManager)
[2024-04-11 08:19:13,958] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 2: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=Ur4dgjD8TqSowCXiSyfkVw, brokerEpoch=15, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[f6SPOBiGmbaLo8jkFB9hyQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:13,963] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 17 (kafka.server.BrokerLifecycleManager)
[2024-04-11 08:19:14,008] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=KjyG_H2QSZq_DFBQ-fIiHg, brokerEpoch=17, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[jhKXB63X8VWnOzlEt3r_Rw]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:14,012] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 19 (kafka.server.BrokerLifecycleManager)
[2024-04-11 08:19:14,053] WARN [BrokerLifecycleManager id=0] Broker 0 sent a heartbeat request but received error STALE_BROKER_EPOCH. (kafka.server.BrokerLifecycleManager)
[2024-04-11 08:19:14,112] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=14, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:14,113] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=KjyG_H2QSZq_DFBQ-fIiHg, brokerEpoch=19, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[jhKXB63X8VWnOzlEt3r_Rw]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:14,152] INFO [BrokerLifecycleManager id=0] The broker has caught up. Transitioning from STARTING to RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-04-11 08:19:14,152] INFO [BrokerServer id=0] Finished waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-04-11 08:19:14,153] INFO [BrokerServer id=0] Waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-04-11 08:19:14,153] INFO [BrokerServer id=0] Finished waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-04-11 08:19:14,156] INFO KafkaConfig values: 
        advertised.listeners = CLIENT://kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9092,INTERNAL://kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9094
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name = 
        auto.create.topics.enable = true
        auto.include.jmx.reporter = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.heartbeat.interval.ms = 2000
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack = null
        broker.session.timeout.ms = 9000
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        connections.max.reauth.ms = 0
        control.plane.listener.name = null
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.listener.names = CONTROLLER
        controller.quorum.append.linger.ms = 25
        controller.quorum.election.backoff.max.ms = 1000
        controller.quorum.election.timeout.ms = 1000
        controller.quorum.fetch.timeout.ms = 2000
        controller.quorum.request.timeout.ms = 2000
        controller.quorum.retry.backoff.ms = 20
        controller.quorum.voters = [0@kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093, 1@kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local:9093, 2@kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local:9093]
        controller.quota.window.num = 11
        controller.quota.window.size.seconds = 1
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delegation.token.secret.key = null
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        early.start.listeners = null
        eligible.leader.replicas.enable = false
        fetch.max.bytes = 57671680
        fetch.purgatory.purge.interval.requests = 1000
        group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
        group.consumer.heartbeat.interval.ms = 5000
        group.consumer.max.heartbeat.interval.ms = 15000
        group.consumer.max.session.timeout.ms = 60000
        group.consumer.max.size = 2147483647
        group.consumer.min.heartbeat.interval.ms = 5000
        group.consumer.min.session.timeout.ms = 45000
        group.consumer.session.timeout.ms = 45000
        group.coordinator.new.enable = false
        group.coordinator.rebalance.protocols = [classic]
        group.coordinator.threads = 1
        group.initial.rebalance.delay.ms = 3000
        group.max.session.timeout.ms = 1800000
        group.max.size = 2147483647
        group.min.session.timeout.ms = 6000
        initial.broker.registration.timeout.ms = 60000
        inter.broker.listener.name = INTERNAL
        inter.broker.protocol.version = 3.7-IV4
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = CLIENT:SASL_PLAINTEXT,INTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
        listeners = CLIENT://:9092,INTERNAL://:9094,CONTROLLER://:9093
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.max.compaction.lag.ms = 9223372036854775807
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /bitnami/kafka/data
        log.dirs = null
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.local.retention.bytes = -2
        log.local.retention.ms = -2
        log.message.downconversion.enable = true
        log.message.format.version = 3.0-IV1
        log.message.timestamp.after.max.ms = 9223372036854775807
        log.message.timestamp.before.max.ms = 9223372036854775807
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connection.creation.rate = 2147483647
        max.connections = 2147483647
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides = 
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1048588
        metadata.log.dir = null
        metadata.log.max.record.bytes.between.snapshots = 20971520
        metadata.log.max.snapshot.interval.ms = 3600000
        metadata.log.segment.bytes = 1073741824
        metadata.log.segment.min.bytes = 8388608
        metadata.log.segment.ms = 604800000
        metadata.max.idle.interval.ms = 500
        metadata.max.retention.bytes = 104857600
        metadata.max.retention.ms = 604800000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        node.id = 0
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 3
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
        process.roles = [controller, broker]
        producer.id.expiration.check.interval.ms = 600000
        producer.id.expiration.ms = 86400000
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.window.num = 11
        quota.window.size.seconds = 1
        remote.log.index.file.cache.total.size.bytes = 1073741824
        remote.log.manager.task.interval.ms = 30000
        remote.log.manager.task.retry.backoff.max.ms = 30000
        remote.log.manager.task.retry.backoff.ms = 500
        remote.log.manager.task.retry.jitter = 0.2
        remote.log.manager.thread.pool.size = 10
        remote.log.metadata.custom.metadata.max.bytes = 128
        remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
        remote.log.metadata.manager.class.path = null
        remote.log.metadata.manager.impl.prefix = rlmm.config.
        remote.log.metadata.manager.listener.name = null
        remote.log.reader.max.pending.tasks = 100
        remote.log.reader.threads = 10
        remote.log.storage.manager.class.name = null
        remote.log.storage.manager.class.path = null
        remote.log.storage.manager.impl.prefix = rsm.config.
        remote.log.storage.system.enable = false
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 30000
        replica.selector.class = null
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.connect.timeout.ms = null
        sasl.login.read.timeout.ms = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.login.retry.backoff.max.ms = 10000
        sasl.login.retry.backoff.ms = 100
        sasl.mechanism.controller.protocol = PLAIN
        sasl.mechanism.inter.broker.protocol = PLAIN
        sasl.oauthbearer.clock.skew.seconds = 30
        sasl.oauthbearer.expected.audience = null
        sasl.oauthbearer.expected.issuer = null
        sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
        sasl.oauthbearer.jwks.endpoint.url = null
        sasl.oauthbearer.scope.claim.name = scope
        sasl.oauthbearer.sub.claim.name = sub
        sasl.oauthbearer.token.endpoint.url = null
        sasl.server.callback.handler.class = null
        sasl.server.max.receive.size = 524288
        security.inter.broker.protocol = PLAINTEXT
        security.providers = null
        server.max.startup.time.ms = 9223372036854775807
        socket.connection.setup.timeout.max.ms = 30000
        socket.connection.setup.timeout.ms = 10000
        socket.listen.backlog.size = 50
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.allow.dn.changes = false
        ssl.allow.san.changes = false
        ssl.cipher.suites = []
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.principal.mapping.rules = DEFAULT
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        telemetry.max.bytes = 1048576
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
        transaction.max.timeout.ms = 900000
        transaction.partition.verification.enable = true
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 2
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 3
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        unstable.api.versions.enable = false
        unstable.metadata.versions.enable = false
        zookeeper.clientCnxnSocket = null
        zookeeper.connect = null
        zookeeper.connection.timeout.ms = null
        zookeeper.max.in.flight.requests = 10
        zookeeper.metadata.migration.enable = false
        zookeeper.metadata.migration.min.batch.size = 200
        zookeeper.session.timeout.ms = 18000
        zookeeper.set.acl = false
        zookeeper.ssl.cipher.suites = null
        zookeeper.ssl.client.enable = false
        zookeeper.ssl.crl.enable = false
        zookeeper.ssl.enabled.protocols = null
        zookeeper.ssl.endpoint.identification.algorithm = HTTPS
        zookeeper.ssl.keystore.location = null
        zookeeper.ssl.keystore.password = null
        zookeeper.ssl.keystore.type = null
        zookeeper.ssl.ocsp.enable = false
        zookeeper.ssl.protocol = TLSv1.2
        zookeeper.ssl.truststore.location = null
        zookeeper.ssl.truststore.password = null
        zookeeper.ssl.truststore.type = null
 (kafka.server.KafkaConfig)
[2024-04-11 08:19:14,167] INFO [BrokerServer id=0] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-04-11 08:19:14,168] INFO [BrokerLifecycleManager id=0] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-04-11 08:19:14,204] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 2: BrokerRegistrationChangeRecord(brokerId=2, brokerEpoch=15, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:14,209] INFO [BrokerLifecycleManager id=0] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-04-11 08:19:14,209] INFO [BrokerServer id=0] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-04-11 08:19:14,210] INFO authorizerStart completed for endpoint CLIENT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-04-11 08:19:14,210] INFO authorizerStart completed for endpoint INTERNAL. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-04-11 08:19:14,210] INFO [SocketServer listenerType=BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2024-04-11 08:19:14,210] INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.DataPlaneAcceptor)
[2024-04-11 08:19:14,212] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2024-04-11 08:19:14,215] INFO [BrokerServer id=0] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-04-11 08:19:14,215] INFO [BrokerServer id=0] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-04-11 08:19:14,215] INFO [BrokerServer id=0] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-04-11 08:19:14,215] INFO [BrokerServer id=0] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-04-11 08:19:14,215] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2024-04-11 08:19:14,216] INFO Kafka version: 3.7.0 (org.apache.kafka.common.utils.AppInfoParser)
[2024-04-11 08:19:14,216] INFO Kafka commitId: 2ae524ed625438c5 (org.apache.kafka.common.utils.AppInfoParser)
[2024-04-11 08:19:14,216] INFO Kafka startTimeMs: 1712823554215 (org.apache.kafka.common.utils.AppInfoParser)
[2024-04-11 08:19:14,218] INFO [KafkaRaftServer nodeId=0] Kafka Server started (kafka.server.KafkaRaftServer)
[2024-04-11 08:19:14,396] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 0: BrokerRegistrationChangeRecord(brokerId=0, brokerEpoch=19, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-11 08:19:21,736] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:27704-0) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:19:31,407] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:22160-0) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:19:44,589] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:26571-0) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:19:53,535] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:49658-1) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:20:05,430] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:33371-1) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:20:15,902] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:39017-1) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:20:23,086] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:12135-2) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:20:28,509] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:35767-2) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:20:39,900] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:40010-2) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:20:47,806] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:1086-3) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:20:58,713] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:22710-3) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:21:08,996] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:25964-3) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:21:13,870] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:56383-4) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:21:27,634] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:13770-4) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:21:36,243] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:8170-4) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)

Are you using any custom parameters or values?

   --set kafkaVersion=3.7.0 \
    --set replicaCount=3 \
    --set kafka.persistence.enabled=false \
    --set kafka.kafkaConfigOverrides=transaction.state.log.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=transaction.state.log.min.isr=2 \
    --set kafka.kafkaConfigOverrides=default.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=num.io.threads=8 \
    --set kafka.kafkaConfigOverrides=num.network.threads=3 \
    --set kafka.kafkaConfigOverrides=log.message.format.version=3.7 \
    --set kafka.kafkaConfigOverrides=inter.broker.protocol.version=3.7 \
    --set kafka.kafkaConfigOverrides=offsets.topic.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=transaction.state.log.num.partitions=50 \
    --set-string labels.app.kubernetes.io/managed-by=Helm \
    --set-string labels.meta.helm.sh/release-name=kafka \
    --set-string labels.meta.helm.sh/release-namespace=pre-logging \
    --set broker.persistence.enabled=false \
    --set controller.persistence.enabled=false \
    --set broker.logPersistence.enabled=false \
    --set kraft.enabled=true \
    --set listeners.client.protocol=SASL_PLAINTEXT \
    --set listeners.controller.protocol=SASL_PLAINTEXT \
    --set listeners.interbroker.protocol=SASL_PLAINTEXT \
    --set listeners.external.protocol=SASL_PLAINTEXT \
    --set sasl.interbroker.user=inter_broker_user \
    --set sasl.interbroker.password=as123456
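Note (not part of the original report): with Helm, repeated `--set` flags for the same key overwrite each other, so of all the `kafka.kafkaConfigOverrides=...` lines above only the last one (`transaction.state.log.num.partitions=50`) actually takes effect. A hedged sketch of passing all broker overrides at once through a values file instead — the key name `extraConfig` is assumed from recent bitnami/kafka chart versions and may differ in yours, so verify it against the chart's README:

```yaml
# values.yaml (sketch) — `extraConfig` is appended to server.properties by
# the bitnami/kafka chart; check your chart version's values for the key name
extraConfig: |
  transaction.state.log.replication.factor=3
  transaction.state.log.min.isr=2
  default.replication.factor=3
  num.io.threads=8
  num.network.threads=3
  offsets.topic.replication.factor=3
  transaction.state.log.num.partitions=50
```

This would then be applied with `helm install kafka bitnami/kafka --namespace pre-logging -f values.yaml` together with the remaining `--set` flags.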

What is the expected behavior?

1. The K8s console logs should no longer report errors, and I should be able to create and modify topics and send messages to Kafka through the account I specified.
2. The container startup log keeps reporting errors, and my application cannot connect to Kafka. This problem has been bothering me for a long time, and I have been unable to solve it since upgrading the Bitnami chart. I hope the maintainers can provide the correct command so that I can run a minimal 3-node cluster.

What do you see instead?

The container startup log keeps reporting errors, and my application cannot connect to Kafka. This has been happening ever since I upgraded the Bitnami chart, and I have been unable to solve it. I hope the maintainers can provide the correct command so that I can run a minimal 3-node cluster.

[2024-04-11 08:21:36,243] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:8170-4) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:21:40,546] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:24844-5) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-04-11 08:21:49,483] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:47120-5) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[... the same "Failed authentication ... Unexpected Kafka request of type METADATA during SASL handshake." message repeats every few seconds ...]
[2024-04-11 08:26:29,116] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.22.111.93 (channelId=10.244.140.74:9092-172.22.111.93:23666-16) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)

Additional information

Same as above: the container startup log keeps reporting the SASL handshake errors, and my application cannot connect to Kafka. This has been happening ever since I upgraded the Bitnami chart. I hope the maintainers can provide the correct command for running a minimal 3-node cluster.

@zjf88865 zjf88865 added the tech-issues The user has a technical issue about an application label Apr 11, 2024
@github-actions github-actions bot added the triage Triage is needed label Apr 11, 2024
@javsalgar javsalgar changed the title Unexpected Kafka request of type METADATA during SASL handshake. [bitnami/kafka] Unexpected Kafka request of type METADATA during SASL handshake. Apr 12, 2024
@github-actions github-actions bot removed the triage Triage is needed label Apr 12, 2024
@github-actions github-actions bot assigned rafariossaa and unassigned javsalgar Apr 12, 2024
@rafariossaa
Contributor

Hi,
I deployed a chart using your command and was not able to reproduce the issue.
I got no errors during the deployment, and I could then create a consumer and a producer and send/read messages.
I used the following to create the producer/consumer clients:

To connect a client to your Kafka, you need to create the 'client.properties' configuration file with the content below:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
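Aside (not from the thread): the `kafka-user-passwords` secret stores a comma-separated list of passwords, one per SASL client user, so the pipeline above decodes it and keeps the first entry. A minimal sketch of what that pipeline does, using a stand-in base64 value instead of a live cluster:

```shell
# base64 of "password1,password2" — a stand-in for the secret's
# .data.client-passwords field, which holds one password per client user
encoded="cGFzc3dvcmQxLHBhc3N3b3JkMg=="

# Decode and keep the first comma-separated entry, i.e. user1's password
first_password="$(printf '%s' "$encoded" | base64 -d | cut -d , -f 1)"
echo "$first_password"   # password1
```

On a real cluster, `kubectl get secret ... -o jsonpath='{.data.client-passwords}'` supplies the encoded value.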

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.7.0-debian-12-r2 --namespace default --command -- sleep infinity
    kubectl cp --namespace default /path/to/client.properties kafka-client:/tmp/client.properties
    kubectl exec --tty -i kafka-client --namespace default -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --producer.config /tmp/client.properties \
            --broker-list kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
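The file-creation step above can be scripted before the `kubectl cp`. A hypothetical helper, assuming the password has already been fetched into `KAFKA_CLIENT_PASSWORD` (the `changeme` default is only a stand-in):

```shell
# KAFKA_CLIENT_PASSWORD is assumed to hold the value extracted from the
# kafka-user-passwords secret; "changeme" is a placeholder default
KAFKA_CLIENT_PASSWORD="${KAFKA_CLIENT_PASSWORD:-changeme}"

# Write client.properties; the escaped backslashes keep the JAAS config's
# literal line continuations in the output file
cat > client.properties <<EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \\
    username="user1" \\
    password="${KAFKA_CLIENT_PASSWORD}";
EOF

cat client.properties
```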

@zjf88865
Author


You need to update the helm repo to the latest version. Both of my servers and my home computer can reproduce the problem above. You can run `helm repo update`, or remove bitnami (`helm repo remove bitnami`) and add it back with `helm repo add bitnami https://charts.bitnami.com/bitnami`, then try again. Because the new version enforces SASL security verification, Kafka cannot start without it being configured. Can you give me a working helm command for the cluster, instead of running clients directly through kubectl? Version 3.7.0 is required.

@zhengbucuo


There is indeed this issue; I have noticed it as well.
My kafka log:
(screenshot of kafka log attached)

@rafariossaa
Contributor

Hi,
This is what I used:

$ helm version
version.BuildInfo{Version:"v3.14.0", GitCommit:"3fc9f4b2638e76f26739cd77c7017139be81d0ea", GitTreeState:"clean", GoVersion:"go1.21.5"}
$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.1
Kustomize Version: v5.0.1
Server Version: v1.28.3
$ helm repo list
NAME            URL                                                                          
bitnami         https://charts.bitnami.com/bitnami                                           
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm search repo bitnami/kafka
NAME            CHART VERSION   APP VERSION     DESCRIPTION
bitnami/kafka   28.0.4          3.7.0           Apache Kafka is a distributed streaming platfor...
helm install kafka bitnami/kafka --namespace pre-logging \
    --set kafkaVersion=3.7.0 \
    --set replicaCount=3 \
    --set kafka.persistence.enabled=false \
    --set kafka.kafkaConfigOverrides=transaction.state.log.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=transaction.state.log.min.isr=2 \
    --set kafka.kafkaConfigOverrides=default.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=num.io.threads=8 \
    --set kafka.kafkaConfigOverrides=num.network.threads=3 \
    --set kafka.kafkaConfigOverrides=log.message.format.version=3.7 \
    --set kafka.kafkaConfigOverrides=inter.broker.protocol.version=3.7 \
    --set kafka.kafkaConfigOverrides=offsets.topic.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=transaction.state.log.num.partitions=50 \
    --set-string labels.app.kubernetes.io/managed-by=Helm \
    --set-string labels.meta.helm.sh/release-name=kafka \
    --set-string labels.meta.helm.sh/release-namespace=pre-logging \
    --set broker.persistence.enabled=false \
    --set controller.persistence.enabled=false \
    --set broker.logPersistence.enabled=false \
    --set kraft.enabled=true \
    --set listeners.client.protocol=SASL_PLAINTEXT \
    --set listeners.controller.protocol=SASL_PLAINTEXT \
    --set listeners.interbroker.protocol=SASL_PLAINTEXT \
    --set listeners.external.protocol=SASL_PLAINTEXT \
    --set sasl.interbroker.user=inter_broker_user \
    --set sasl.interbroker.password=as123456
kubectl get pods -n pre-logging
NAME                 READY   STATUS    RESTARTS   AGE
kafka-controller-0   1/1     Running   0          57s
kafka-controller-1   1/1     Running   0          57s
kafka-controller-2   1/1     Running   0          57s
kubectl logs -n pre-logging kafka-controller-0 
Defaulted container "kafka" out of: kafka, kafka-init (init)
kafka 11:01:12.51 INFO  ==> 
kafka 11:01:12.51 INFO  ==> Welcome to the Bitnami kafka container
kafka 11:01:12.51 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
kafka 11:01:12.52 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
kafka 11:01:12.52 INFO  ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
kafka 11:01:12.52 INFO  ==> 
kafka 11:01:12.53 INFO  ==> ** Starting Kafka setup **
kafka 11:01:12.73 INFO  ==> Initializing KRaft storage metadata
kafka 11:01:12.73 INFO  ==> Adding KRaft SCRAM users at storage bootstrap
kafka 11:01:12.79 INFO  ==> Formatting storage directories to add metadata...
metaPropertiesEnsemble=MetaPropertiesEnsemble(metadataLogDir=Optional.empty, dirs={/bitnami/kafka/data: EMPTY})
Formatting /bitnami/kafka/data with metadata.version 3.7-IV4.
kafka 11:01:19.82 INFO  ==> ** Kafka setup finished! **

kafka 11:01:19.85 INFO  ==> ** Starting Kafka **
[2024-04-29 11:01:22,997] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2024-04-29 11:01:24,400] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2024-04-29 11:01:25,391] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-04-29 11:01:25,397] INFO [ControllerServer id=0] Starting controller (kafka.server.ControllerServer)
[2024-04-29 11:01:27,410] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-04-29 11:01:27,786] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:27,883] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(CONTROLLER) (kafka.network.SocketServer)
[2024-04-29 11:01:27,889] INFO CONTROLLER: resolved wildcard host to kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local (org.apache.kafka.metadata.ListenerInfo)
[2024-04-29 11:01:27,901] INFO authorizerStart completed for endpoint CONTROLLER. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-04-29 11:01:27,902] INFO [SharedServer id=0] Starting SharedServer (kafka.server.SharedServer)
[2024-04-29 11:01:28,235] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2024-04-29 11:01:28,236] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2024-04-29 11:01:28,236] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2024-04-29 11:01:28,416] INFO Initialized snapshots with IDs Set() from /bitnami/kafka/data/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2024-04-29 11:01:28,507] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-04-29 11:01:29,530] INFO [RaftManager id=0] Completed transition to Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1296) from null (org.apache.kafka.raft.QuorumState)
[2024-04-29 11:01:29,586] INFO [kafka-0-raft-outbound-request-thread]: Starting (org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2024-04-29 11:01:29,586] INFO [kafka-0-raft-io-thread]: Starting (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-04-29 11:01:29,612] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:29,614] INFO [ControllerServer id=0] Waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-04-29 11:01:29,615] INFO [ControllerServer id=0] Finished waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-04-29 11:01:29,615] INFO [RaftManager id=0] Registered the listener org.apache.kafka.image.loader.MetadataLoader@1906519282 (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-29 11:01:29,712] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:29,805] INFO [QuorumController id=0] Creating new QuorumController with clusterId tGotg5jFmyb2BaeIMMqDmh. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:29,806] INFO [RaftManager id=0] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@447991081 (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-29 11:01:29,813] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:29,816] INFO [controller-0-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:29,817] INFO [controller-0-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:29,818] INFO [controller-0-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:29,883] INFO [controller-0-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:29,913] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:29,923] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:29,933] INFO [RaftManager id=0] Completed transition to CandidateState(localId=0, epoch=1, retries=1, voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1371) from Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1296) (org.apache.kafka.raft.QuorumState)
[2024-04-29 11:01:29,936] INFO [QuorumController id=0] In the new epoch 1, the leader is (none). (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:30,081] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,086] INFO [ControllerServer id=0] Waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-04-29 11:01:30,086] INFO [ControllerServer id=0] Finished waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-04-29 11:01:30,086] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,087] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2024-04-29 11:01:30,093] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.DataPlaneAcceptor)
[2024-04-29 11:01:30,187] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,287] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,291] INFO [controller-0-to-controller-registration-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:30,292] INFO [ControllerServer id=0] Waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-04-29 11:01:30,293] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] initialized channel manager. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:30,293] INFO [ControllerServer id=0] Finished waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-04-29 11:01:30,293] INFO [ControllerServer id=0] Waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-04-29 11:01:30,293] INFO [ControllerServer id=0] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-04-29 11:01:30,294] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] maybeSendControllerRegistration: cannot register yet because the metadata version is still 3.0-IV1, which does not support KIP-919 controller registration. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:30,295] INFO [BrokerServer id=0] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer)
[2024-04-29 11:01:30,297] INFO [BrokerServer id=0] Starting broker (kafka.server.BrokerServer)
[2024-04-29 11:01:30,388] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,390] INFO [broker-0-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:30,391] INFO [broker-0-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:30,391] INFO [broker-0-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:30,392] INFO [broker-0-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:30,488] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,589] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,591] INFO [BrokerServer id=0] Waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-04-29 11:01:30,591] INFO [BrokerServer id=0] Finished waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-04-29 11:01:30,595] INFO [broker-0-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:30,689] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,790] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,890] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,991] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,095] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,213] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,280] INFO [RaftManager id=0] Election has timed out, backing off for 0ms before becoming a candidate again (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-29 11:01:31,280] INFO [RaftManager id=0] Re-elect as candidate after election backoff has completed (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-29 11:01:31,286] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-04-29 11:01:31,292] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:31,295] INFO [RaftManager id=0] Completed transition to CandidateState(localId=0, epoch=2, retries=2, voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1557) from CandidateState(localId=0, epoch=1, retries=1, voteStates={0=GRANTED, 1=REJECTED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1371) (org.apache.kafka.raft.QuorumState)
[2024-04-29 11:01:31,295] INFO [QuorumController id=0] In the new epoch 2, the leader is (none). (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:31,296] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:31,381] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,382] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(CLIENT) (kafka.network.SocketServer)
[2024-04-29 11:01:31,384] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-04-29 11:01:31,388] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:31,388] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:31,481] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,487] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(INTERNAL) (kafka.network.SocketServer)
[2024-04-29 11:01:31,581] INFO [RaftManager id=0] Completed transition to Leader(localId=0, epoch=2, epochStartOffset=0, highWatermark=Optional.empty, voterStates={0=ReplicaState(nodeId=0, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 1=ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 2=ReplicaState(nodeId=2, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false)}) from CandidateState(localId=0, epoch=2, retries=2, voteStates={0=GRANTED, 1=GRANTED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1557) (org.apache.kafka.raft.QuorumState)
[2024-04-29 11:01:31,582] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,583] INFO [broker-0-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,584] INFO [broker-0-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,585] INFO [broker-0-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,601] INFO [broker-0-to-controller-directory-assignments-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,602] INFO [broker-0-to-controller-directory-assignments-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,615] INFO [controller-0-to-controller-registration-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,682] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,783] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,799] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,801] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,881] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,883] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,883] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,886] INFO [ExpirationReaper-0-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,984] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,001] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:32,002] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:32,084] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,185] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,285] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,295] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 657 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,386] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,391] INFO Unable to read the broker epoch in /bitnami/kafka/data. (kafka.log.LogManager)
[2024-04-29 11:01:32,393] INFO [broker-0-to-controller-heartbeat-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,394] INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,399] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 117 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,401] INFO [BrokerLifecycleManager id=0] Incarnation IiE1Pz-wSVKvIDICKRcQsQ of broker 0 in cluster tGotg5jFmyb2BaeIMMqDmh is now STARTING. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:32,486] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,492] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 102 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,494] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 81 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,496] INFO [NodeToControllerChannelManager id=0 name=heartbeat] Client requested disconnect from node 0 (org.apache.kafka.clients.NetworkClient)
[2024-04-29 11:01:32,581] INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,585] INFO [RaftManager id=0] High watermark set to LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=106)]) for the first time for epoch 2 based on indexOfHw 1 and voters [ReplicaState(nodeId=0, endOffset=Optional[LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=106)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), ReplicaState(nodeId=1, endOffset=Optional[LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=106)])], lastFetchTimestamp=1714388492582, lastCaughtUpTimestamp=1714388492582, hasAcknowledgedLeader=true), ReplicaState(nodeId=2, endOffset=Optional[LogOffsetMetadata(offset=0, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=0)])], lastFetchTimestamp=1714388492181, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)] (org.apache.kafka.raft.LeaderState)
[2024-04-29 11:01:32,705] INFO [MetadataLoader id=0] initializeNewPublishers: The loader is still catching up because we have loaded up to offset -1, but the high water mark is 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,785] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 126 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,786] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 100 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,881] INFO [MetadataLoader id=0] initializeNewPublishers: The loader is still catching up because we have loaded up to offset -1, but the high water mark is 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,881] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 153 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,881] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:32,882] INFO [NodeToControllerChannelManager id=0 name=heartbeat] Client requested disconnect from node 0 (org.apache.kafka.clients.NetworkClient)
[2024-04-29 11:01:32,882] INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,887] INFO [QuorumController id=0] Becoming the active controller at epoch 2, next write offset 1. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,888] INFO [MetadataLoader id=0] maybePublishMetadata(LOG_DELTA): The loader is still catching up because we have not loaded a controller record as of offset 0 and high water mark is 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,981] INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,981] INFO [MetadataLoader id=0] initializeNewPublishers: The loader finished catching up to the current high water mark of 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,992] WARN [QuorumController id=0] Performing controller activation. The metadata log appears to be empty. Appending 3 bootstrap record(s) in metadata transaction at metadata.version 3.7-IV4 from bootstrap source 'the binary bootstrap metadata file: /bitnami/kafka/data/bootstrap.checkpoint'. Setting the ZK migration state to NONE since this is a de-novo KRaft cluster. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,996] INFO [QuorumController id=0] Replayed BeginTransactionRecord(name='Bootstrap records') at offset 1. (org.apache.kafka.controller.OffsetControlManager)
[2024-04-29 11:01:33,081] INFO [QuorumController id=0] Replayed a FeatureLevelRecord setting metadata version to 3.7-IV4 (org.apache.kafka.controller.FeatureControlManager)
[2024-04-29 11:01:33,085] INFO [QuorumController id=0] Replayed UserScramCredentialRecord creating new entry for user1 with mechanism SCRAM_SHA_256. (org.apache.kafka.controller.ScramControlManager)
[2024-04-29 11:01:33,085] INFO [QuorumController id=0] Replayed UserScramCredentialRecord creating new entry for user1 with mechanism SCRAM_SHA_512. (org.apache.kafka.controller.ScramControlManager)
[2024-04-29 11:01:33,086] INFO [QuorumController id=0] Replayed EndTransactionRecord() at offset 6. (org.apache.kafka.controller.OffsetControlManager)
[2024-04-29 11:01:33,185] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,186] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing KRaftMetadataCachePublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,186] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing FeaturesPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,186] INFO [ControllerServer id=0] Loaded new metadata Features(version=3.0-IV1, finalizedFeatures={metadata.version=1}, finalizedFeaturesEpoch=0). (org.apache.kafka.metadata.publisher.FeaturesPublisher)
[2024-04-29 11:01:33,186] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ControllerRegistrationsPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,186] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ControllerRegistrationManager with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,187] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing DynamicConfigPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,187] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] maybeSendControllerRegistration: cannot register yet because the metadata version is still 3.0-IV1, which does not support KIP-919 controller registration. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:33,189] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing DynamicClientQuotaPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,190] INFO [BrokerServer id=0] Waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-04-29 11:01:33,190] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ScramPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,191] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing DelegationTokenPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,194] INFO [QuorumController id=0] Replayed initial RegisterBrokerRecord for broker 2: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=WIFeiDIzSAaoIg5oOBq6-g, brokerEpoch=7, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[KsIvMjIYpSOqz-7M94ssLg]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:33,195] INFO [QuorumController id=0] Replayed initial RegisterBrokerRecord for broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=IiE1Pz-wSVKvIDICKRcQsQ, brokerEpoch=8, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[d9lo71YK9ZzacyhfbVljZQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:33,196] INFO [QuorumController id=0] Replayed initial RegisterBrokerRecord for broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=i7We8oJuQ5OXmi8K1v5jwA, brokerEpoch=9, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[3nW2dIcb9Z7elmqyAx1_PQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:33,282] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ControllerMetadataMetricsPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,283] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing AclPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,284] INFO [BrokerServer id=0] Finished waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-04-29 11:01:33,284] INFO [BrokerServer id=0] Waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-04-29 11:01:33,284] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing BrokerMetadataPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,285] INFO [BrokerMetadataPublisher id=0] Publishing initial metadata at offset OffsetAndEpoch(offset=0, epoch=2) with metadata.version 3.0-IV1. (kafka.server.metadata.BrokerMetadataPublisher)
[2024-04-29 11:01:33,286] INFO Loading logs from log dirs ArrayBuffer(/bitnami/kafka/data) (kafka.log.LogManager)
[2024-04-29 11:01:33,292] INFO No logs found to be loaded in /bitnami/kafka/data (kafka.log.LogManager)
[2024-04-29 11:01:33,388] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 8 (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:33,389] INFO Loaded 0 logs in 101ms (kafka.log.LogManager)
[2024-04-29 11:01:33,389] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2024-04-29 11:01:33,392] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2024-04-29 11:01:33,501] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2024-04-29 11:01:34,095] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
[2024-04-29 11:01:34,183] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-04-29 11:01:34,186] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2024-04-29 11:01:34,186] INFO [AddPartitionsToTxnSenderThread-0]: Starting (kafka.server.AddPartitionsToTxnManager)
[2024-04-29 11:01:34,190] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2024-04-29 11:01:34,191] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-04-29 11:01:34,194] INFO [TxnMarkerSenderThread-0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-04-29 11:01:34,195] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-04-29 11:01:34,288] INFO [QuorumController id=0] Replayed RegisterControllerRecord contaning ControllerRegistration(id=1, incarnationId=kOm2CgsKQVC-euvbsFphJQ, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLLER', securityProtocol=SASL_PLAINTEXT, host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093)], supportedFeatures={metadata.version: 1-19}). (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,293] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=i7We8oJuQ5OXmi8K1v5jwA, brokerEpoch=13, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[3nW2dIcb9Z7elmqyAx1_PQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,382] INFO [ControllerServer id=0] Loaded new metadata Features(version=3.7-IV4, finalizedFeatures={metadata.version=19}, finalizedFeaturesEpoch=6). (org.apache.kafka.metadata.publisher.FeaturesPublisher)
[2024-04-29 11:01:34,382] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 2: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=WIFeiDIzSAaoIg5oOBq6-g, brokerEpoch=14, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[KsIvMjIYpSOqz-7M94ssLg]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,385] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] sendControllerRegistration: attempting to send ControllerRegistrationRequestData(controllerId=0, incarnationId=FsUDjHGWSlqvpAsMsesrtA, zkMigrationReady=false, listeners=[Listener(name='CONTROLLER', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093, securityProtocol=2)], features=[Feature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)]) (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:34,385] INFO [BrokerMetadataPublisher id=0] Updating metadata.version to 19 at offset OffsetAndEpoch(offset=6, epoch=2). (kafka.server.metadata.BrokerMetadataPublisher)
[2024-04-29 11:01:34,389] INFO [QuorumController id=0] Replayed RegisterControllerRecord contaning ControllerRegistration(id=2, incarnationId=7Jz23xKrS5yUjMfx7mXyGg, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLLER', securityProtocol=SASL_PLAINTEXT, host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093)], supportedFeatures={metadata.version: 1-19}). (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,481] INFO [BrokerMetadataPublisher id=0] Resending BrokerRegistration with existing incarnation-id to inform the controller about log directories in the broker following metadata update: previousMetadataVersion: 3.0-IV1 newMetadataVersion: 3.7-IV4 (kafka.server.metadata.BrokerMetadataPublisher)
[2024-04-29 11:01:34,482] INFO [QuorumController id=0] Replayed RegisterControllerRecord contaning ControllerRegistration(id=0, incarnationId=FsUDjHGWSlqvpAsMsesrtA, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLLER', securityProtocol=SASL_PLAINTEXT, host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093)], supportedFeatures={metadata.version: 1-19}). (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,496] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=i7We8oJuQ5OXmi8K1v5jwA, brokerEpoch=17, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[3nW2dIcb9Z7elmqyAx1_PQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,582] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=IiE1Pz-wSVKvIDICKRcQsQ, brokerEpoch=18, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[d9lo71YK9ZzacyhfbVljZQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,601] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] Our registration has been persisted to the metadata log. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:34,601] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] RegistrationResponseHandler: controller acknowledged ControllerRegistrationRequest. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:34,602] INFO [QuorumController id=0] processBrokerHeartbeat: event failed with StaleBrokerEpochException in 136 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:34,683] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 18 (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,685] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=IiE1Pz-wSVKvIDICKRcQsQ, brokerEpoch=20, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[d9lo71YK9ZzacyhfbVljZQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,730] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 20 (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,731] INFO [QuorumController id=0] processBrokerHeartbeat: event failed with StaleBrokerEpochException in 70 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:34,733] WARN [BrokerLifecycleManager id=0] Broker 0 sent a heartbeat request but received error STALE_BROKER_EPOCH. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,735] INFO [BrokerLifecycleManager id=0] The broker has caught up. Transitioning from STARTING to RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,735] INFO [BrokerServer id=0] Finished waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-04-29 11:01:34,736] INFO [BrokerServer id=0] Waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-04-29 11:01:34,736] INFO [BrokerServer id=0] Finished waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-04-29 11:01:34,738] INFO KafkaConfig values: 
        advertised.listeners = CLIENT://kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9092,INTERNAL://kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9094
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name = 
        auto.create.topics.enable = true
        auto.include.jmx.reporter = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.heartbeat.interval.ms = 2000
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack = null
        broker.session.timeout.ms = 9000
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        connections.max.reauth.ms = 0
        control.plane.listener.name = null
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.listener.names = CONTROLLER
        controller.quorum.append.linger.ms = 25
        controller.quorum.election.backoff.max.ms = 1000
        controller.quorum.election.timeout.ms = 1000
        controller.quorum.fetch.timeout.ms = 2000
        controller.quorum.request.timeout.ms = 2000
        controller.quorum.retry.backoff.ms = 20
        controller.quorum.voters = [0@kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093, 1@kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local:9093, 2@kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local:9093]
        controller.quota.window.num = 11
        controller.quota.window.size.seconds = 1
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delegation.token.secret.key = null
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        early.start.listeners = null
        eligible.leader.replicas.enable = false
        fetch.max.bytes = 57671680
        fetch.purgatory.purge.interval.requests = 1000
        group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
        group.consumer.heartbeat.interval.ms = 5000
        group.consumer.max.heartbeat.interval.ms = 15000
        group.consumer.max.session.timeout.ms = 60000
        group.consumer.max.size = 2147483647
        group.consumer.min.heartbeat.interval.ms = 5000
        group.consumer.min.session.timeout.ms = 45000
        group.consumer.session.timeout.ms = 45000
        group.coordinator.new.enable = false
        group.coordinator.rebalance.protocols = [classic]
        group.coordinator.threads = 1
        group.initial.rebalance.delay.ms = 3000
        group.max.session.timeout.ms = 1800000
        group.max.size = 2147483647
        group.min.session.timeout.ms = 6000
        initial.broker.registration.timeout.ms = 60000
        inter.broker.listener.name = INTERNAL
        inter.broker.protocol.version = 3.7-IV4
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = CLIENT:SASL_PLAINTEXT,INTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
        listeners = CLIENT://:9092,INTERNAL://:9094,CONTROLLER://:9093
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.max.compaction.lag.ms = 9223372036854775807
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /bitnami/kafka/data
        log.dirs = null
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.local.retention.bytes = -2
        log.local.retention.ms = -2
        log.message.downconversion.enable = true
        log.message.format.version = 3.0-IV1
        log.message.timestamp.after.max.ms = 9223372036854775807
        log.message.timestamp.before.max.ms = 9223372036854775807
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connection.creation.rate = 2147483647
        max.connections = 2147483647
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides = 
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1048588
        metadata.log.dir = null
        metadata.log.max.record.bytes.between.snapshots = 20971520
        metadata.log.max.snapshot.interval.ms = 3600000
        metadata.log.segment.bytes = 1073741824
        metadata.log.segment.min.bytes = 8388608
        metadata.log.segment.ms = 604800000
        metadata.max.idle.interval.ms = 500
        metadata.max.retention.bytes = 104857600
        metadata.max.retention.ms = 604800000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        node.id = 0
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 3
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
        process.roles = [controller, broker]
        producer.id.expiration.check.interval.ms = 600000
        producer.id.expiration.ms = 86400000
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.window.num = 11
        quota.window.size.seconds = 1
        remote.log.index.file.cache.total.size.bytes = 1073741824
        remote.log.manager.task.interval.ms = 30000
        remote.log.manager.task.retry.backoff.max.ms = 30000
        remote.log.manager.task.retry.backoff.ms = 500
        remote.log.manager.task.retry.jitter = 0.2
        remote.log.manager.thread.pool.size = 10
        remote.log.metadata.custom.metadata.max.bytes = 128
        remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
        remote.log.metadata.manager.class.path = null
        remote.log.metadata.manager.impl.prefix = rlmm.config.
        remote.log.metadata.manager.listener.name = null
        remote.log.reader.max.pending.tasks = 100
        remote.log.reader.threads = 10
        remote.log.storage.manager.class.name = null
        remote.log.storage.manager.class.path = null
        remote.log.storage.manager.impl.prefix = rsm.config.
        remote.log.storage.system.enable = false
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 30000
        replica.selector.class = null
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.connect.timeout.ms = null
        sasl.login.read.timeout.ms = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.login.retry.backoff.max.ms = 10000
        sasl.login.retry.backoff.ms = 100
        sasl.mechanism.controller.protocol = PLAIN
        sasl.mechanism.inter.broker.protocol = PLAIN
        sasl.oauthbearer.clock.skew.seconds = 30
        sasl.oauthbearer.expected.audience = null
        sasl.oauthbearer.expected.issuer = null
        sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
        sasl.oauthbearer.jwks.endpoint.url = null
        sasl.oauthbearer.scope.claim.name = scope
        sasl.oauthbearer.sub.claim.name = sub
        sasl.oauthbearer.token.endpoint.url = null
        sasl.server.callback.handler.class = null
        sasl.server.max.receive.size = 524288
        security.inter.broker.protocol = PLAINTEXT
        security.providers = null
        server.max.startup.time.ms = 9223372036854775807
        socket.connection.setup.timeout.max.ms = 30000
        socket.connection.setup.timeout.ms = 10000
        socket.listen.backlog.size = 50
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.allow.dn.changes = false
        ssl.allow.san.changes = false
        ssl.cipher.suites = []
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.principal.mapping.rules = DEFAULT
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        telemetry.max.bytes = 1048576
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
        transaction.max.timeout.ms = 900000
        transaction.partition.verification.enable = true
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 2
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 3
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        unstable.api.versions.enable = false
        unstable.metadata.versions.enable = false
        zookeeper.clientCnxnSocket = null
        zookeeper.connect = null
        zookeeper.connection.timeout.ms = null
        zookeeper.max.in.flight.requests = 10
        zookeeper.metadata.migration.enable = false
        zookeeper.metadata.migration.min.batch.size = 200
        zookeeper.session.timeout.ms = 18000
        zookeeper.set.acl = false
        zookeeper.ssl.cipher.suites = null
        zookeeper.ssl.client.enable = false
        zookeeper.ssl.crl.enable = false
        zookeeper.ssl.enabled.protocols = null
        zookeeper.ssl.endpoint.identification.algorithm = HTTPS
        zookeeper.ssl.keystore.location = null
        zookeeper.ssl.keystore.password = null
        zookeeper.ssl.keystore.type = null
        zookeeper.ssl.ocsp.enable = false
        zookeeper.ssl.protocol = TLSv1.2
        zookeeper.ssl.truststore.location = null
        zookeeper.ssl.truststore.password = null
        zookeeper.ssl.truststore.type = null
 (kafka.server.KafkaConfig)
[2024-04-29 11:01:34,782] INFO [BrokerLifecycleManager id=0] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,783] INFO [QuorumController id=0] The request from broker 1 to unfence has been granted because it has caught up with the offset of its register broker record 17. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2024-04-29 11:01:34,788] INFO [BrokerServer id=0] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-04-29 11:01:34,796] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=17, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,796] INFO [QuorumController id=0] The request from broker 0 to unfence has been granted because it has caught up with the offset of its register broker record 20. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2024-04-29 11:01:34,797] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 0: BrokerRegistrationChangeRecord(brokerId=0, brokerEpoch=20, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,918] INFO [BrokerLifecycleManager id=0] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,981] INFO [BrokerServer id=0] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-04-29 11:01:34,981] INFO [QuorumController id=0] The request from broker 2 to unfence has been granted because it has caught up with the offset of its register broker record 14. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2024-04-29 11:01:34,981] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 2: BrokerRegistrationChangeRecord(brokerId=2, brokerEpoch=14, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,982] INFO authorizerStart completed for endpoint INTERNAL. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-04-29 11:01:34,982] INFO authorizerStart completed for endpoint CLIENT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-04-29 11:01:34,982] INFO [SocketServer listenerType=BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2024-04-29 11:01:34,983] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2024-04-29 11:01:34,984] INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.DataPlaneAcceptor)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2024-04-29 11:01:34,988] INFO Kafka version: 3.7.0 (org.apache.kafka.common.utils.AppInfoParser)
[2024-04-29 11:01:34,988] INFO Kafka commitId: 2ae524ed625438c5 (org.apache.kafka.common.utils.AppInfoParser)
[2024-04-29 11:01:34,988] INFO Kafka startTimeMs: 1714388494987 (org.apache.kafka.common.utils.AppInfoParser)
[2024-04-29 11:01:34,990] INFO [KafkaRaftServer nodeId=0] Kafka Server started (kafka.server.KafkaRaftServer)

And I am not getting the "Failed authentication ..." message.
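For reference, the error in the issue title ("Unexpected Kafka request of type METADATA during SASL handshake") is what the broker logs when a client connects to a SASL listener without SASL configured, so the first plain request it sends is METADATA. A minimal `client.properties` sketch for testing against these listeners (the mechanism and credentials are assumptions and must match the chart's `sasl.client.users`/`sasl.client.passwords` values):

```properties
# Hypothetical client.properties for testing the SASL_PLAINTEXT listener;
# username/password below are placeholders, not values from this deployment.
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="user1" \
  password="<password>";
```

A client that omits this file (e.g. `kafka-console-producer.sh` without `--producer.config client.properties`) speaks PLAINTEXT and reproduces the handshake error.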

@zjf88865
Author

zjf88865 commented May 4, 2024

Hi, this is what I used:

$ helm version
version.BuildInfo{Version:"v3.14.0", GitCommit:"3fc9f4b2638e76f26739cd77c7017139be81d0ea", GitTreeState:"clean", GoVersion:"go1.21.5"}
$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.1
Kustomize Version: v5.0.1
Server Version: v1.28.3
$ helm repo list
NAME            URL                                                                          
bitnami         https://charts.bitnami.com/bitnami                                           
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm search repo bitnami/kafka
NAME            CHART VERSION   APP VERSION     DESCRIPTION
bitnami/kafka   28.0.4          3.7.0           Apache Kafka is a distributed streaming platfor...
helm install kafka bitnami/kafka --namespace pre-logging \
    --set kafkaVersion=3.7.0 \
    --set replicaCount=3 \
    --set kafka.persistence.enabled=false \
    --set kafka.kafkaConfigOverrides=transaction.state.log.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=transaction.state.log.min.isr=2 \
    --set kafka.kafkaConfigOverrides=default.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=num.io.threads=8 \
    --set kafka.kafkaConfigOverrides=num.network.threads=3 \
    --set kafka.kafkaConfigOverrides=log.message.format.version=3.7 \
    --set kafka.kafkaConfigOverrides=inter.broker.protocol.version=3.7 \
    --set kafka.kafkaConfigOverrides=offsets.topic.replication.factor=3 \
    --set kafka.kafkaConfigOverrides=transaction.state.log.num.partitions=50 \
    --set-string labels.app.kubernetes.io/managed-by=Helm \
    --set-string labels.meta.helm.sh/release-name=kafka \
    --set-string labels.meta.helm.sh/release-namespace=pre-logging \
    --set broker.persistence.enabled=false \
    --set controller.persistence.enabled=false \
    --set broker.logPersistence.enabled=false \
    --set kraft.enabled=true \
    --set listeners.client.protocol=SASL_PLAINTEXT \
    --set listeners.controller.protocol=SASL_PLAINTEXT \
    --set listeners.interbroker.protocol=SASL_PLAINTEXT \
    --set listeners.external.protocol=SASL_PLAINTEXT \
    --set sasl.interbroker.user=inter_broker_user \
    --set sasl.interbroker.password=as123456
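Note that the repeated `--set kafka.kafkaConfigOverrides=...` flags overwrite one another, so only the last override actually reaches the chart; the rendered config above confirms this (e.g. `default.replication.factor = 1`). A way to pass all overrides together, assuming this chart version exposes an `extraConfig` value for appending lines to `server.properties`, is a values file:

```yaml
# Hypothetical values.yaml sketch; assumes the bitnami/kafka chart's
# `extraConfig` value appends these lines to server.properties.
extraConfig: |
  transaction.state.log.replication.factor=3
  transaction.state.log.min.isr=2
  default.replication.factor=3
  offsets.topic.replication.factor=3
  transaction.state.log.num.partitions=50
```

applied with `helm install kafka bitnami/kafka -n pre-logging -f values.yaml` alongside the remaining `--set` flags.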
kubectl get pods -n pre-logging
NAME                 READY   STATUS    RESTARTS   AGE
kafka-controller-0   1/1     Running   0          57s
kafka-controller-1   1/1     Running   0          57s
kafka-controller-2   1/1     Running   0          57s
kubectl logs -n pre-logging kafka-controller-0 
Defaulted container "kafka" out of: kafka, kafka-init (init)
kafka 11:01:12.51 INFO  ==> 
kafka 11:01:12.51 INFO  ==> Welcome to the Bitnami kafka container
kafka 11:01:12.51 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
kafka 11:01:12.52 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
kafka 11:01:12.52 INFO  ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
kafka 11:01:12.52 INFO  ==> 
kafka 11:01:12.53 INFO  ==> ** Starting Kafka setup **
kafka 11:01:12.73 INFO  ==> Initializing KRaft storage metadata
kafka 11:01:12.73 INFO  ==> Adding KRaft SCRAM users at storage bootstrap
kafka 11:01:12.79 INFO  ==> Formatting storage directories to add metadata...
metaPropertiesEnsemble=MetaPropertiesEnsemble(metadataLogDir=Optional.empty, dirs={/bitnami/kafka/data: EMPTY})
Formatting /bitnami/kafka/data with metadata.version 3.7-IV4.
kafka 11:01:19.82 INFO  ==> ** Kafka setup finished! **

kafka 11:01:19.85 INFO  ==> ** Starting Kafka **
[2024-04-29 11:01:22,997] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2024-04-29 11:01:24,400] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2024-04-29 11:01:25,391] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-04-29 11:01:25,397] INFO [ControllerServer id=0] Starting controller (kafka.server.ControllerServer)
[2024-04-29 11:01:27,410] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-04-29 11:01:27,786] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:27,883] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(CONTROLLER) (kafka.network.SocketServer)
[2024-04-29 11:01:27,889] INFO CONTROLLER: resolved wildcard host to kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local (org.apache.kafka.metadata.ListenerInfo)
[2024-04-29 11:01:27,901] INFO authorizerStart completed for endpoint CONTROLLER. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-04-29 11:01:27,902] INFO [SharedServer id=0] Starting SharedServer (kafka.server.SharedServer)
[2024-04-29 11:01:28,235] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2024-04-29 11:01:28,236] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2024-04-29 11:01:28,236] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2024-04-29 11:01:28,416] INFO Initialized snapshots with IDs Set() from /bitnami/kafka/data/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2024-04-29 11:01:28,507] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-04-29 11:01:29,530] INFO [RaftManager id=0] Completed transition to Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1296) from null (org.apache.kafka.raft.QuorumState)
[2024-04-29 11:01:29,586] INFO [kafka-0-raft-outbound-request-thread]: Starting (org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2024-04-29 11:01:29,586] INFO [kafka-0-raft-io-thread]: Starting (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-04-29 11:01:29,612] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:29,614] INFO [ControllerServer id=0] Waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-04-29 11:01:29,615] INFO [ControllerServer id=0] Finished waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-04-29 11:01:29,615] INFO [RaftManager id=0] Registered the listener org.apache.kafka.image.loader.MetadataLoader@1906519282 (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-29 11:01:29,712] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:29,805] INFO [QuorumController id=0] Creating new QuorumController with clusterId tGotg5jFmyb2BaeIMMqDmh. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:29,806] INFO [RaftManager id=0] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@447991081 (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-29 11:01:29,813] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:29,816] INFO [controller-0-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:29,817] INFO [controller-0-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:29,818] INFO [controller-0-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:29,883] INFO [controller-0-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:29,913] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:29,923] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:29,933] INFO [RaftManager id=0] Completed transition to CandidateState(localId=0, epoch=1, retries=1, voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1371) from Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1296) (org.apache.kafka.raft.QuorumState)
[2024-04-29 11:01:29,936] INFO [QuorumController id=0] In the new epoch 1, the leader is (none). (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:30,081] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,086] INFO [ControllerServer id=0] Waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-04-29 11:01:30,086] INFO [ControllerServer id=0] Finished waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-04-29 11:01:30,086] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,087] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2024-04-29 11:01:30,093] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.DataPlaneAcceptor)
[2024-04-29 11:01:30,187] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,287] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,291] INFO [controller-0-to-controller-registration-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:30,292] INFO [ControllerServer id=0] Waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-04-29 11:01:30,293] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] initialized channel manager. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:30,293] INFO [ControllerServer id=0] Finished waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-04-29 11:01:30,293] INFO [ControllerServer id=0] Waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-04-29 11:01:30,293] INFO [ControllerServer id=0] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-04-29 11:01:30,294] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] maybeSendControllerRegistration: cannot register yet because the metadata version is still 3.0-IV1, which does not support KIP-919 controller registration. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:30,295] INFO [BrokerServer id=0] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer)
[2024-04-29 11:01:30,297] INFO [BrokerServer id=0] Starting broker (kafka.server.BrokerServer)
[2024-04-29 11:01:30,388] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,390] INFO [broker-0-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:30,391] INFO [broker-0-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:30,391] INFO [broker-0-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:30,392] INFO [broker-0-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-29 11:01:30,488] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,589] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,591] INFO [BrokerServer id=0] Waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-04-29 11:01:30,591] INFO [BrokerServer id=0] Finished waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-04-29 11:01:30,595] INFO [broker-0-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:30,689] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,790] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,890] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:30,991] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,095] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,213] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,280] INFO [RaftManager id=0] Election has timed out, backing off for 0ms before becoming a candidate again (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-29 11:01:31,280] INFO [RaftManager id=0] Re-elect as candidate after election backoff has completed (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-29 11:01:31,286] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-04-29 11:01:31,292] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:31,295] INFO [RaftManager id=0] Completed transition to CandidateState(localId=0, epoch=2, retries=2, voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1557) from CandidateState(localId=0, epoch=1, retries=1, voteStates={0=GRANTED, 1=REJECTED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1371) (org.apache.kafka.raft.QuorumState)
[2024-04-29 11:01:31,295] INFO [QuorumController id=0] In the new epoch 2, the leader is (none). (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:31,296] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:31,381] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,382] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(CLIENT) (kafka.network.SocketServer)
[2024-04-29 11:01:31,384] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-04-29 11:01:31,388] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:31,388] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-29 11:01:31,481] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,487] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(INTERNAL) (kafka.network.SocketServer)
[2024-04-29 11:01:31,581] INFO [RaftManager id=0] Completed transition to Leader(localId=0, epoch=2, epochStartOffset=0, highWatermark=Optional.empty, voterStates={0=ReplicaState(nodeId=0, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 1=ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 2=ReplicaState(nodeId=2, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false)}) from CandidateState(localId=0, epoch=2, retries=2, voteStates={0=GRANTED, 1=GRANTED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1557) (org.apache.kafka.raft.QuorumState)
[2024-04-29 11:01:31,582] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,583] INFO [broker-0-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,584] INFO [broker-0-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,585] INFO [broker-0-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,601] INFO [broker-0-to-controller-directory-assignments-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,602] INFO [broker-0-to-controller-directory-assignments-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,615] INFO [controller-0-to-controller-registration-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:31,682] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,783] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,799] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,801] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,881] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,883] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:31,883] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,886] INFO [ExpirationReaper-0-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:31,984] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,001] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:32,002] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:32,084] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,185] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,285] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,295] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 657 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,386] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,391] INFO Unable to read the broker epoch in /bitnami/kafka/data. (kafka.log.LogManager)
[2024-04-29 11:01:32,393] INFO [broker-0-to-controller-heartbeat-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,394] INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,399] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 117 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,401] INFO [BrokerLifecycleManager id=0] Incarnation IiE1Pz-wSVKvIDICKRcQsQ of broker 0 in cluster tGotg5jFmyb2BaeIMMqDmh is now STARTING. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:32,486] INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,492] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 102 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,494] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 81 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,496] INFO [NodeToControllerChannelManager id=0 name=heartbeat] Client requested disconnect from node 0 (org.apache.kafka.clients.NetworkClient)
[2024-04-29 11:01:32,581] INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,585] INFO [RaftManager id=0] High watermark set to LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=106)]) for the first time for epoch 2 based on indexOfHw 1 and voters [ReplicaState(nodeId=0, endOffset=Optional[LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=106)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), ReplicaState(nodeId=1, endOffset=Optional[LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=106)])], lastFetchTimestamp=1714388492582, lastCaughtUpTimestamp=1714388492582, hasAcknowledgedLeader=true), ReplicaState(nodeId=2, endOffset=Optional[LogOffsetMetadata(offset=0, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=0)])], lastFetchTimestamp=1714388492181, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)] (org.apache.kafka.raft.LeaderState)
[2024-04-29 11:01:32,705] INFO [MetadataLoader id=0] initializeNewPublishers: The loader is still catching up because we have loaded up to offset -1, but the high water mark is 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,785] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 126 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,786] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 100 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,881] INFO [MetadataLoader id=0] initializeNewPublishers: The loader is still catching up because we have loaded up to offset -1, but the high water mark is 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,881] INFO [QuorumController id=0] registerBroker: event failed with NotControllerException in 153 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,881] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-29 11:01:32,882] INFO [NodeToControllerChannelManager id=0 name=heartbeat] Client requested disconnect from node 0 (org.apache.kafka.clients.NetworkClient)
[2024-04-29 11:01:32,882] INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,887] INFO [QuorumController id=0] Becoming the active controller at epoch 2, next write offset 1. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,888] INFO [MetadataLoader id=0] maybePublishMetadata(LOG_DELTA): The loader is still catching up because we have not loaded a controller record as of offset 0 and high water mark is 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,981] INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093 (id: 0 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-04-29 11:01:32,981] INFO [MetadataLoader id=0] initializeNewPublishers: The loader finished catching up to the current high water mark of 1 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:32,992] WARN [QuorumController id=0] Performing controller activation. The metadata log appears to be empty. Appending 3 bootstrap record(s) in metadata transaction at metadata.version 3.7-IV4 from bootstrap source 'the binary bootstrap metadata file: /bitnami/kafka/data/bootstrap.checkpoint'. Setting the ZK migration state to NONE since this is a de-novo KRaft cluster. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:32,996] INFO [QuorumController id=0] Replayed BeginTransactionRecord(name='Bootstrap records') at offset 1. (org.apache.kafka.controller.OffsetControlManager)
[2024-04-29 11:01:33,081] INFO [QuorumController id=0] Replayed a FeatureLevelRecord setting metadata version to 3.7-IV4 (org.apache.kafka.controller.FeatureControlManager)
[2024-04-29 11:01:33,085] INFO [QuorumController id=0] Replayed UserScramCredentialRecord creating new entry for user1 with mechanism SCRAM_SHA_256. (org.apache.kafka.controller.ScramControlManager)
[2024-04-29 11:01:33,085] INFO [QuorumController id=0] Replayed UserScramCredentialRecord creating new entry for user1 with mechanism SCRAM_SHA_512. (org.apache.kafka.controller.ScramControlManager)
[2024-04-29 11:01:33,086] INFO [QuorumController id=0] Replayed EndTransactionRecord() at offset 6. (org.apache.kafka.controller.OffsetControlManager)
[2024-04-29 11:01:33,185] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,186] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing KRaftMetadataCachePublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,186] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing FeaturesPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,186] INFO [ControllerServer id=0] Loaded new metadata Features(version=3.0-IV1, finalizedFeatures={metadata.version=1}, finalizedFeaturesEpoch=0). (org.apache.kafka.metadata.publisher.FeaturesPublisher)
[2024-04-29 11:01:33,186] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ControllerRegistrationsPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,186] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ControllerRegistrationManager with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,187] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing DynamicConfigPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,187] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] maybeSendControllerRegistration: cannot register yet because the metadata version is still 3.0-IV1, which does not support KIP-919 controller registration. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:33,189] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing DynamicClientQuotaPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,190] INFO [BrokerServer id=0] Waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-04-29 11:01:33,190] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ScramPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,191] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing DelegationTokenPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,194] INFO [QuorumController id=0] Replayed initial RegisterBrokerRecord for broker 2: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=WIFeiDIzSAaoIg5oOBq6-g, brokerEpoch=7, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[KsIvMjIYpSOqz-7M94ssLg]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:33,195] INFO [QuorumController id=0] Replayed initial RegisterBrokerRecord for broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=IiE1Pz-wSVKvIDICKRcQsQ, brokerEpoch=8, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[d9lo71YK9ZzacyhfbVljZQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:33,196] INFO [QuorumController id=0] Replayed initial RegisterBrokerRecord for broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=i7We8oJuQ5OXmi8K1v5jwA, brokerEpoch=9, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[3nW2dIcb9Z7elmqyAx1_PQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:33,282] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing ControllerMetadataMetricsPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,283] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing AclPublisher controller id=0 with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,284] INFO [BrokerServer id=0] Finished waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-04-29 11:01:33,284] INFO [BrokerServer id=0] Waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-04-29 11:01:33,284] INFO [MetadataLoader id=0] InitializeNewPublishers: initializing BrokerMetadataPublisher with a snapshot at offset 0 (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-29 11:01:33,285] INFO [BrokerMetadataPublisher id=0] Publishing initial metadata at offset OffsetAndEpoch(offset=0, epoch=2) with metadata.version 3.0-IV1. (kafka.server.metadata.BrokerMetadataPublisher)
[2024-04-29 11:01:33,286] INFO Loading logs from log dirs ArrayBuffer(/bitnami/kafka/data) (kafka.log.LogManager)
[2024-04-29 11:01:33,292] INFO No logs found to be loaded in /bitnami/kafka/data (kafka.log.LogManager)
[2024-04-29 11:01:33,388] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 8 (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:33,389] INFO Loaded 0 logs in 101ms (kafka.log.LogManager)
[2024-04-29 11:01:33,389] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2024-04-29 11:01:33,392] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2024-04-29 11:01:33,501] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2024-04-29 11:01:34,095] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
[2024-04-29 11:01:34,183] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-04-29 11:01:34,186] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2024-04-29 11:01:34,186] INFO [AddPartitionsToTxnSenderThread-0]: Starting (kafka.server.AddPartitionsToTxnManager)
[2024-04-29 11:01:34,190] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2024-04-29 11:01:34,191] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-04-29 11:01:34,194] INFO [TxnMarkerSenderThread-0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-04-29 11:01:34,195] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-04-29 11:01:34,288] INFO [QuorumController id=0] Replayed RegisterControllerRecord contaning ControllerRegistration(id=1, incarnationId=kOm2CgsKQVC-euvbsFphJQ, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLLER', securityProtocol=SASL_PLAINTEXT, host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093)], supportedFeatures={metadata.version: 1-19}). (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,293] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=i7We8oJuQ5OXmi8K1v5jwA, brokerEpoch=13, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[3nW2dIcb9Z7elmqyAx1_PQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,382] INFO [ControllerServer id=0] Loaded new metadata Features(version=3.7-IV4, finalizedFeatures={metadata.version=19}, finalizedFeaturesEpoch=6). (org.apache.kafka.metadata.publisher.FeaturesPublisher)
[2024-04-29 11:01:34,382] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 2: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=WIFeiDIzSAaoIg5oOBq6-g, brokerEpoch=14, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[KsIvMjIYpSOqz-7M94ssLg]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,385] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] sendControllerRegistration: attempting to send ControllerRegistrationRequestData(controllerId=0, incarnationId=FsUDjHGWSlqvpAsMsesrtA, zkMigrationReady=false, listeners=[Listener(name='CONTROLLER', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093, securityProtocol=2)], features=[Feature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)]) (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:34,385] INFO [BrokerMetadataPublisher id=0] Updating metadata.version to 19 at offset OffsetAndEpoch(offset=6, epoch=2). (kafka.server.metadata.BrokerMetadataPublisher)
[2024-04-29 11:01:34,389] INFO [QuorumController id=0] Replayed RegisterControllerRecord contaning ControllerRegistration(id=2, incarnationId=7Jz23xKrS5yUjMfx7mXyGg, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLLER', securityProtocol=SASL_PLAINTEXT, host='kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093)], supportedFeatures={metadata.version: 1-19}). (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,481] INFO [BrokerMetadataPublisher id=0] Resending BrokerRegistration with existing incarnation-id to inform the controller about log directories in the broker following metadata update: previousMetadataVersion: 3.0-IV1 newMetadataVersion: 3.7-IV4 (kafka.server.metadata.BrokerMetadataPublisher)
[2024-04-29 11:01:34,482] INFO [QuorumController id=0] Replayed RegisterControllerRecord contaning ControllerRegistration(id=0, incarnationId=FsUDjHGWSlqvpAsMsesrtA, zkMigrationReady=false, listeners=[Endpoint(listenerName='CONTROLLER', securityProtocol=SASL_PLAINTEXT, host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9093)], supportedFeatures={metadata.version: 1-19}). (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,496] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 1: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=i7We8oJuQ5OXmi8K1v5jwA, brokerEpoch=17, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[3nW2dIcb9Z7elmqyAx1_PQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,582] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=IiE1Pz-wSVKvIDICKRcQsQ, brokerEpoch=18, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[d9lo71YK9ZzacyhfbVljZQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,601] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] Our registration has been persisted to the metadata log. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:34,601] INFO [ControllerRegistrationManager id=0 incarnation=FsUDjHGWSlqvpAsMsesrtA] RegistrationResponseHandler: controller acknowledged ControllerRegistrationRequest. (kafka.server.ControllerRegistrationManager)
[2024-04-29 11:01:34,602] INFO [QuorumController id=0] processBrokerHeartbeat: event failed with StaleBrokerEpochException in 136 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:34,683] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 18 (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,685] INFO [QuorumController id=0] Replayed RegisterBrokerRecord modifying the registration for broker 0: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=IiE1Pz-wSVKvIDICKRcQsQ, brokerEpoch=20, endPoints=[BrokerEndpoint(name='INTERNAL', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9094, securityProtocol=2), BrokerEndpoint(name='CLIENT', host='kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local', port=9092, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=19)], rack=null, fenced=true, inControlledShutdown=false, logDirs=[d9lo71YK9ZzacyhfbVljZQ]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,730] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 20 (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,731] INFO [QuorumController id=0] processBrokerHeartbeat: event failed with StaleBrokerEpochException in 70 microseconds. (org.apache.kafka.controller.QuorumController)
[2024-04-29 11:01:34,733] WARN [BrokerLifecycleManager id=0] Broker 0 sent a heartbeat request but received error STALE_BROKER_EPOCH. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,735] INFO [BrokerLifecycleManager id=0] The broker has caught up. Transitioning from STARTING to RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,735] INFO [BrokerServer id=0] Finished waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-04-29 11:01:34,736] INFO [BrokerServer id=0] Waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-04-29 11:01:34,736] INFO [BrokerServer id=0] Finished waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-04-29 11:01:34,738] INFO KafkaConfig values: 
        advertised.listeners = CLIENT://kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9092,INTERNAL://kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9094
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name = 
        auto.create.topics.enable = true
        auto.include.jmx.reporter = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.heartbeat.interval.ms = 2000
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack = null
        broker.session.timeout.ms = 9000
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        connections.max.reauth.ms = 0
        control.plane.listener.name = null
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.listener.names = CONTROLLER
        controller.quorum.append.linger.ms = 25
        controller.quorum.election.backoff.max.ms = 1000
        controller.quorum.election.timeout.ms = 1000
        controller.quorum.fetch.timeout.ms = 2000
        controller.quorum.request.timeout.ms = 2000
        controller.quorum.retry.backoff.ms = 20
        controller.quorum.voters = [0@kafka-controller-0.kafka-controller-headless.pre-logging.svc.cluster.local:9093, 1@kafka-controller-1.kafka-controller-headless.pre-logging.svc.cluster.local:9093, 2@kafka-controller-2.kafka-controller-headless.pre-logging.svc.cluster.local:9093]
        controller.quota.window.num = 11
        controller.quota.window.size.seconds = 1
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delegation.token.secret.key = null
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        early.start.listeners = null
        eligible.leader.replicas.enable = false
        fetch.max.bytes = 57671680
        fetch.purgatory.purge.interval.requests = 1000
        group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
        group.consumer.heartbeat.interval.ms = 5000
        group.consumer.max.heartbeat.interval.ms = 15000
        group.consumer.max.session.timeout.ms = 60000
        group.consumer.max.size = 2147483647
        group.consumer.min.heartbeat.interval.ms = 5000
        group.consumer.min.session.timeout.ms = 45000
        group.consumer.session.timeout.ms = 45000
        group.coordinator.new.enable = false
        group.coordinator.rebalance.protocols = [classic]
        group.coordinator.threads = 1
        group.initial.rebalance.delay.ms = 3000
        group.max.session.timeout.ms = 1800000
        group.max.size = 2147483647
        group.min.session.timeout.ms = 6000
        initial.broker.registration.timeout.ms = 60000
        inter.broker.listener.name = INTERNAL
        inter.broker.protocol.version = 3.7-IV4
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = CLIENT:SASL_PLAINTEXT,INTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
        listeners = CLIENT://:9092,INTERNAL://:9094,CONTROLLER://:9093
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.max.compaction.lag.ms = 9223372036854775807
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /bitnami/kafka/data
        log.dirs = null
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.local.retention.bytes = -2
        log.local.retention.ms = -2
        log.message.downconversion.enable = true
        log.message.format.version = 3.0-IV1
        log.message.timestamp.after.max.ms = 9223372036854775807
        log.message.timestamp.before.max.ms = 9223372036854775807
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connection.creation.rate = 2147483647
        max.connections = 2147483647
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides = 
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1048588
        metadata.log.dir = null
        metadata.log.max.record.bytes.between.snapshots = 20971520
        metadata.log.max.snapshot.interval.ms = 3600000
        metadata.log.segment.bytes = 1073741824
        metadata.log.segment.min.bytes = 8388608
        metadata.log.segment.ms = 604800000
        metadata.max.idle.interval.ms = 500
        metadata.max.retention.bytes = 104857600
        metadata.max.retention.ms = 604800000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        node.id = 0
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 3
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
        process.roles = [controller, broker]
        producer.id.expiration.check.interval.ms = 600000
        producer.id.expiration.ms = 86400000
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.window.num = 11
        quota.window.size.seconds = 1
        remote.log.index.file.cache.total.size.bytes = 1073741824
        remote.log.manager.task.interval.ms = 30000
        remote.log.manager.task.retry.backoff.max.ms = 30000
        remote.log.manager.task.retry.backoff.ms = 500
        remote.log.manager.task.retry.jitter = 0.2
        remote.log.manager.thread.pool.size = 10
        remote.log.metadata.custom.metadata.max.bytes = 128
        remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
        remote.log.metadata.manager.class.path = null
        remote.log.metadata.manager.impl.prefix = rlmm.config.
        remote.log.metadata.manager.listener.name = null
        remote.log.reader.max.pending.tasks = 100
        remote.log.reader.threads = 10
        remote.log.storage.manager.class.name = null
        remote.log.storage.manager.class.path = null
        remote.log.storage.manager.impl.prefix = rsm.config.
        remote.log.storage.system.enable = false
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 30000
        replica.selector.class = null
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.connect.timeout.ms = null
        sasl.login.read.timeout.ms = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.login.retry.backoff.max.ms = 10000
        sasl.login.retry.backoff.ms = 100
        sasl.mechanism.controller.protocol = PLAIN
        sasl.mechanism.inter.broker.protocol = PLAIN
        sasl.oauthbearer.clock.skew.seconds = 30
        sasl.oauthbearer.expected.audience = null
        sasl.oauthbearer.expected.issuer = null
        sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
        sasl.oauthbearer.jwks.endpoint.url = null
        sasl.oauthbearer.scope.claim.name = scope
        sasl.oauthbearer.sub.claim.name = sub
        sasl.oauthbearer.token.endpoint.url = null
        sasl.server.callback.handler.class = null
        sasl.server.max.receive.size = 524288
        security.inter.broker.protocol = PLAINTEXT
        security.providers = null
        server.max.startup.time.ms = 9223372036854775807
        socket.connection.setup.timeout.max.ms = 30000
        socket.connection.setup.timeout.ms = 10000
        socket.listen.backlog.size = 50
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.allow.dn.changes = false
        ssl.allow.san.changes = false
        ssl.cipher.suites = []
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.principal.mapping.rules = DEFAULT
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        telemetry.max.bytes = 1048576
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
        transaction.max.timeout.ms = 900000
        transaction.partition.verification.enable = true
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 2
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 3
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        unstable.api.versions.enable = false
        unstable.metadata.versions.enable = false
        zookeeper.clientCnxnSocket = null
        zookeeper.connect = null
        zookeeper.connection.timeout.ms = null
        zookeeper.max.in.flight.requests = 10
        zookeeper.metadata.migration.enable = false
        zookeeper.metadata.migration.min.batch.size = 200
        zookeeper.session.timeout.ms = 18000
        zookeeper.set.acl = false
        zookeeper.ssl.cipher.suites = null
        zookeeper.ssl.client.enable = false
        zookeeper.ssl.crl.enable = false
        zookeeper.ssl.enabled.protocols = null
        zookeeper.ssl.endpoint.identification.algorithm = HTTPS
        zookeeper.ssl.keystore.location = null
        zookeeper.ssl.keystore.password = null
        zookeeper.ssl.keystore.type = null
        zookeeper.ssl.ocsp.enable = false
        zookeeper.ssl.protocol = TLSv1.2
        zookeeper.ssl.truststore.location = null
        zookeeper.ssl.truststore.password = null
        zookeeper.ssl.truststore.type = null
 (kafka.server.KafkaConfig)
[2024-04-29 11:01:34,782] INFO [BrokerLifecycleManager id=0] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,783] INFO [QuorumController id=0] The request from broker 1 to unfence has been granted because it has caught up with the offset of its register broker record 17. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2024-04-29 11:01:34,788] INFO [BrokerServer id=0] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-04-29 11:01:34,796] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=17, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,796] INFO [QuorumController id=0] The request from broker 0 to unfence has been granted because it has caught up with the offset of its register broker record 20. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2024-04-29 11:01:34,797] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 0: BrokerRegistrationChangeRecord(brokerId=0, brokerEpoch=20, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,918] INFO [BrokerLifecycleManager id=0] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-04-29 11:01:34,981] INFO [BrokerServer id=0] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-04-29 11:01:34,981] INFO [QuorumController id=0] The request from broker 2 to unfence has been granted because it has caught up with the offset of its register broker record 14. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2024-04-29 11:01:34,981] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 2: BrokerRegistrationChangeRecord(brokerId=2, brokerEpoch=14, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-29 11:01:34,982] INFO authorizerStart completed for endpoint INTERNAL. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-04-29 11:01:34,982] INFO authorizerStart completed for endpoint CLIENT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-04-29 11:01:34,982] INFO [SocketServer listenerType=BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2024-04-29 11:01:34,983] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2024-04-29 11:01:34,984] INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.DataPlaneAcceptor)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-04-29 11:01:34,987] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2024-04-29 11:01:34,988] INFO Kafka version: 3.7.0 (org.apache.kafka.common.utils.AppInfoParser)
[2024-04-29 11:01:34,988] INFO Kafka commitId: 2ae524ed625438c5 (org.apache.kafka.common.utils.AppInfoParser)
[2024-04-29 11:01:34,988] INFO Kafka startTimeMs: 1714388494987 (org.apache.kafka.common.utils.AppInfoParser)
[2024-04-29 11:01:34,990] INFO [KafkaRaftServer nodeId=0] Kafka Server started (kafka.server.KafkaRaftServer)

And I am not getting the Failed authentication ... message.

You need to wait more than 10 minutes before it appears. It does not show up right after startup.

@rafariossaa
Contributor

I waited 40 minutes and no errors appeared:

$ kubectl get pods -n pre-logging 
NAME                 READY   STATUS    RESTARTS   AGE
kafka-controller-0   1/1     Running   0          39m
kafka-controller-1   1/1     Running   0          39m
kafka-controller-2   1/1     Running   0          39m


$ kubectl logs -n pre-logging kafka-controller-0
...
(kafka.server.KafkaConfig)
[2024-05-09 07:28:53,806] INFO [BrokerLifecycleManager id=0] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-05-09 07:28:53,806] INFO [BrokerServer id=0] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-05-09 07:28:53,891] INFO [BrokerLifecycleManager id=0] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-05-09 07:28:53,891] INFO [BrokerServer id=0] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-05-09 07:28:53,892] INFO authorizerStart completed for endpoint INTERNAL. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-05-09 07:28:53,892] INFO authorizerStart completed for endpoint CLIENT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-05-09 07:28:53,892] INFO [SocketServer listenerType=BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2024-05-09 07:28:53,893] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2024-05-09 07:28:53,894] INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.DataPlaneAcceptor)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2024-05-09 07:28:53,898] INFO Kafka version: 3.7.0 (org.apache.kafka.common.utils.AppInfoParser)
[2024-05-09 07:28:53,898] INFO Kafka commitId: 2ae524ed625438c5 (org.apache.kafka.common.utils.AppInfoParser)
[2024-05-09 07:28:53,898] INFO Kafka startTimeMs: 1715239733897 (org.apache.kafka.common.utils.AppInfoParser)
[2024-05-09 07:28:53,900] INFO [KafkaRaftServer nodeId=0] Kafka Server started (kafka.server.KafkaRaftServer)
[2024-05-09 07:28:53,905] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 0: BrokerRegistrationChangeRecord(brokerId=0, brokerEpoch=17, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-05-09 07:28:54,240] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=18, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-05-09 07:38:51,290] INFO [RaftManager id=0] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-05-09 07:38:53,682] INFO [NodeToControllerChannelManager id=0 name=registration] Node 2 disconnected. (org.apache.kafka.clients.NetworkClient)

@SanthappanavaraShruthi

We also have a similar setup with 3 brokers.
I see exactly the same error on one of the brokers, while the other 2 seem to be fine.

[2024-05-09 10:27:14,590] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /xx.x.x.xxx (channelId=xx.x.x.xx:9094-xx.x.x.xxx:59288-261) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2024-05-09 10:27:24,589] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /xx.x.x.xxx (channelId=xx.x.x.xx:9094-xx.x.x.xxx:51848-263) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)

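This log line indicates that a connection on the SASL listener sent an ordinary Kafka API request (here METADATA) before completing the SASL handshake, which is typical of a client or proxy speaking PLAINTEXT to a SASL_PLAINTEXT port, or of a TCP health-check probe. To see which source address is responsible, a small helper along these lines (a sketch based on the log format above, not part of the chart) can tally failures per client IP:

```python
import re
from collections import Counter

# Matches the broker's "Failed authentication" line, following the format shown above.
FAILED_AUTH = re.compile(
    r"Failed authentication with /(?P<client>\S+) "
    r"\(channelId=\S+\) "
    r"\(Unexpected Kafka request of type (?P<api>\w+) during SASL handshake\.\)"
)

def count_failed_auth(log_lines):
    """Return a Counter mapping client address -> number of SASL handshake failures."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_AUTH.search(line)
        if match:
            counts[match.group("client")] += 1
    return counts
```

Running it over `kubectl logs` output for the affected broker shows which pod or probe address keeps connecting without authenticating.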
@zjf88865
Author

We also have the similar setup with 3 brokers. I see exactly same error in one of the broker and other 2 seems to be fine.

[2024-05-09 10:27:14,590] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /xx.x.x.xxx (channelId=xx.x.x.xx:9094-xx.x.x.xxx:59288-261) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector) [2024-05-09 10:27:24,589] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /xx.x.x.xxx (channelId=xx.x.x.xx:9094-xx.x.x.xxx:51848-263) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)

Yes, I raised this issue a long time ago, but no one has dealt with it. If security is the concern, configurable parameters should be provided. However, this has not been addressed, and rashly upgrading the Helm chart version can bring huge losses: my application is no longer able to connect to my Kafka. Fortunately it is only in my testing environment, otherwise it would have caused serious losses.

@zjf88865
Author

I waited 40 minutes and no errors appeared:

$ kubectl get pods -n pre-logging 
NAME                 READY   STATUS    RESTARTS   AGE
kafka-controller-0   1/1     Running   0          39m
kafka-controller-1   1/1     Running   0          39m
kafka-controller-2   1/1     Running   0          39m


$ kubectl logs -n pre-logging kafka-controller-0
...
(kafka.server.KafkaConfig)
[2024-05-09 07:28:53,806] INFO [BrokerLifecycleManager id=0] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-05-09 07:28:53,806] INFO [BrokerServer id=0] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-05-09 07:28:53,891] INFO [BrokerLifecycleManager id=0] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-05-09 07:28:53,891] INFO [BrokerServer id=0] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-05-09 07:28:53,892] INFO authorizerStart completed for endpoint INTERNAL. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-05-09 07:28:53,892] INFO authorizerStart completed for endpoint CLIENT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-05-09 07:28:53,892] INFO [SocketServer listenerType=BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2024-05-09 07:28:53,893] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2024-05-09 07:28:53,894] INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.DataPlaneAcceptor)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-05-09 07:28:53,897] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2024-05-09 07:28:53,898] INFO Kafka version: 3.7.0 (org.apache.kafka.common.utils.AppInfoParser)
[2024-05-09 07:28:53,898] INFO Kafka commitId: 2ae524ed625438c5 (org.apache.kafka.common.utils.AppInfoParser)
[2024-05-09 07:28:53,898] INFO Kafka startTimeMs: 1715239733897 (org.apache.kafka.common.utils.AppInfoParser)
[2024-05-09 07:28:53,900] INFO [KafkaRaftServer nodeId=0] Kafka Server started (kafka.server.KafkaRaftServer)
[2024-05-09 07:28:53,905] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 0: BrokerRegistrationChangeRecord(brokerId=0, brokerEpoch=17, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-05-09 07:28:54,240] INFO [QuorumController id=0] Replayed BrokerRegistrationChangeRecord modifying the registration for broker 1: BrokerRegistrationChangeRecord(brokerId=1, brokerEpoch=18, fenced=-1, inControlledShutdown=0, logDirs=[]) (org.apache.kafka.controller.ClusterControlManager)
[2024-05-09 07:38:51,290] INFO [RaftManager id=0] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-05-09 07:38:53,682] INFO [NodeToControllerChannelManager id=0 name=registration] Node 2 disconnected. (org.apache.kafka.clients.NetworkClient)

Many people have now mentioned this serious bug in the comments. Please help address it, as it has already caused us losses.


This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label May 26, 2024
@rafariossaa
Contributor

I need to reproduce the issue in order to identify the cause and look for a fix.
If I cannot reproduce it on my side and it is not trivial, it is very difficult to fix.

Which Kubernetes cluster are you using?

@SanthappanavaraShruthi

I need to reproduce the issue in order to identify the cause and look for a fix. If I can not reproduce it on my side and it is not trivial is very difficult to fix it.

Which kubernetes cluster are you using ?

Hi @rafariossaa, please find the details from our end.

We have installed Kafka on AWS EKS with chart version 28.0.3 and image tag 3.7.0-debian-12-r3, enabled SASL_PLAINTEXT authentication, and access the Kafka brokers from outside the cluster using a ClusterIP service and ingress-nginx.

Error: [2024-05-16 07:30:01,490] INFO [SocketServer listenerType=BROKER, nodeId=1] Failed authentication with /x.x.3.92 (channelId=x.x.3.100:9095-x.x.3.92:50650-88) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)

The authentication failure is with the ingress-nginx-controller pod (x.x.3.92) against the broker x.x.3.100:9095.

Even though the error appears in the log, we are able to produce and consume messages from the Kafka brokers. Let me know if we have missed any configuration.
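For reference, any client connecting through a SASL_PLAINTEXT listener must itself perform the SASL handshake; connections that send plain Kafka requests without one produce exactly this log line. A minimal `client.properties` for the chart's default PLAIN mechanism might look like the following sketch (the username, password, and listener port are placeholders, not values taken from this thread):

```properties
# Placeholder credentials: substitute the user/password configured for the chart.
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="user1" \
    password="changeme";
```

Passing this to the console tools, e.g. `kafka-console-producer.sh --bootstrap-server <host>:9095 --producer.config client.properties --topic test`, helps confirm that a properly authenticated client does not trigger the message, which would point at an unauthenticated connection (such as a proxy health check) as the source.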

@zjf88865
Author

zjf88865 commented May 30, 2024

I need to reproduce the issue in order to identify the cause and look for a fix. If I can not reproduce it on my side and it is not trivial is very difficult to fix it.

Which kubernetes cluster are you using ?

[root@master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.9", GitCommit:"c1de2d70269039fe55efb98e737d9a29f9155246", GitTreeState:"clean", BuildDate:"2022-07-13T14:26:51Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.9", GitCommit:"c1de2d70269039fe55efb98e737d9a29f9155246", GitTreeState:"clean", BuildDate:"2022-07-13T14:19:57Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
[root@master ~]# helm version  
version.BuildInfo{Version:"v3.12.1", GitCommit:"f32a527a060157990e2aa86bf45010dfb3cc8b8d", GitTreeState:"clean", GoVersion:"go1.20.4"}

kafkaVersion=3.7.0

image tag bitnami/kafka:3.7.0-debian-12-r6


github-actions bot commented Jun 5, 2024

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

@github-actions github-actions bot added the solved label Jun 5, 2024
@bitnami-bot bitnami-bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jun 5, 2024