This repository has been archived by the owner on Oct 7, 2023. It is now read-only.

Kafka fails to start: Zookeeper namespace does not exist #75

Closed
F21 opened this issue Jul 11, 2017 · 4 comments

F21 commented Jul 11, 2017

I have a Kafka 0.11 and Zookeeper 3.4 setup running flawlessly in Docker.

I want to replace Zookeeper with etcd and zetcd.

This is the error I get when I start Kafka:

m9edd51-zetcd.m9edd51 (172.18.0.3:2181) open
[2017-07-11 01:32:09,769] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	alter.config.policy.class.name = null
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = -1
	broker.id.generation.enable = true
	broker.rack = null
	compression.type = producer
	connections.max.idle.ms = 600000
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 0
	group.max.session.timeout.ms = 300000
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.listener.name = null
	inter.broker.protocol.version = 0.11.0-IV2
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
	listeners = null
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /var/lib/kafka/data
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.format.version = 0.11.0-IV2
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 1440
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	port = 9092
	principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
	producer.purgatory.purge.interval.requests = 1000
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism.inter.broker.protocol = GSSAPI
	security.inter.broker.protocol = PLAINTEXT
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = null
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 1
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 1
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.connect = m9edd51-zetcd.m9edd51:2181/kafka
	zookeeper.connection.timeout.ms = 6000
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2017-07-11 01:32:09,934] INFO starting (kafka.server.KafkaServer)
[2017-07-11 01:32:09,936] INFO Connecting to zookeeper on m9edd51-zetcd.m9edd51:2181/kafka (kafka.server.KafkaServer)
[2017-07-11 01:32:09,958] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-07-11 01:32:09,971] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:host.name=m9edd51-kafka1.m9edd51 (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:java.version=1.8.0_131 (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-0.11.0.0.jar:/opt/kafka/bin/../libs/connect-file-0.11.0.0.jar:/opt/kafka/bin/../libs/connect-json-0.11.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-0.11.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-0.11.0.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b05.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.5.jar:/opt/kafka/bin/../libs/jackson-core-2.8.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/opt/kafka/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b05.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.24.jar:/opt/kafka/bin/../libs/jersey-common-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.24.jar:/opt/kafka/bin/../libs/jersey-guava-2.24.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.24.jar:/opt/kafka/bin/../libs/jersey-server-2.24.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.3.jar:/opt/kafka/bin/../libs/kafka-clients-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka-tools-0.11.0.0.jar:/opt/kafka/bin/../libs/kafka_2.12-0.11.0.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-0.11.0.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-0.11.0.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-1.3.0.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.0.24.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.0.1.jar:/opt/kafka/bin/../libs/scala-library-2.12.2.jar:/opt/kafka/bin/../libs/scala-parser-combinators_2.12-1.0.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:os.version=4.10.0-26-generic (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:user.name=kafka (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:user.home=/opt/kafka (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,972] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:09,973] INFO Initiating client connection, connectString=m9edd51-zetcd.m9edd51:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@75329a49 (org.apache.zookeeper.ZooKeeper)
[2017-07-11 01:32:10,046] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2017-07-11 01:32:10,048] INFO Opening socket connection to server m9edd51-zetcd.m9edd51/172.18.0.3:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-07-11 01:32:10,105] INFO Socket connection established to m9edd51-zetcd.m9edd51/172.18.0.3:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2017-07-11 01:32:10,113] WARN Connected to an old server; r-o mode will be unavailable (org.apache.zookeeper.ClientCnxnSocket)
[2017-07-11 01:32:10,113] INFO Session establishment complete on server m9edd51-zetcd.m9edd51/172.18.0.3:2181, sessionid = 0x694d5d2f465b6c07, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2017-07-11 01:32:10,114] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2017-07-11 01:32:10,147] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.config.ConfigException: Zookeeper namespace does not exist
	at kafka.utils.ZkPath.checkNamespace(ZkUtils.scala:1019)
	at kafka.utils.ZkPath.createPersistent(ZkUtils.scala:1034)
	at kafka.utils.ZkUtils.makeSurePersistentPathExists(ZkUtils.scala:456)
	at kafka.server.KafkaServer.$anonfun$initZk$2(KafkaServer.scala:333)
	at kafka.server.KafkaServer.$anonfun$initZk$2$adapted(KafkaServer.scala:327)
	at scala.Option.foreach(Option.scala:257)
	at kafka.server.KafkaServer.initZk(KafkaServer.scala:327)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:191)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
	at kafka.Kafka$.main(Kafka.scala:65)
	at kafka.Kafka.main(Kafka.scala)
[2017-07-11 01:32:10,151] INFO shutting down (kafka.server.KafkaServer)
[2017-07-11 01:32:10,159] INFO shut down completed (kafka.server.KafkaServer)
[2017-07-11 01:32:10,159] FATAL Exiting Kafka. (kafka.server.KafkaServerStartable)
[2017-07-11 01:32:10,161] INFO shutting down (kafka.server.KafkaServer)

If I use zkui to browse the zetcd server, it shows this warning when connecting: get(/) failed, err=node not exists.

I think this is because zetcd does not create the root node by default, so Kafka is unable to create its chroot node.

If I run Zookeeper and browse it with zkui instead, I do not get that error, and there is also a zookeeper child node.
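The stack trace is consistent with this theory: it points at ZkUtils.checkNamespace, which verifies that the namespace root exists before Kafka creates its chroot. A minimal Python sketch of that behavior against a toy in-memory store (names like ToyZkClient are illustrative, not Kafka's actual API):

```python
class ConfigException(Exception):
    pass

class ToyZkClient:
    """Toy stand-in for a ZooKeeper client: a set of existing znode paths."""
    def __init__(self, nodes):
        self.nodes = set(nodes)

    def exists(self, path):
        return path in self.nodes

def create_persistent(client, path):
    # Roughly what checkNamespace does: refuse to create the chroot
    # unless the namespace root already exists.
    if not client.exists("/"):
        raise ConfigException("Zookeeper namespace does not exist")
    client.nodes.add(path)

# Against a zetcd-like backend with no root node, startup fails:
empty = ToyZkClient(nodes=[])
try:
    create_persistent(empty, "/kafka")
except ConfigException as e:
    print(e)  # Zookeeper namespace does not exist

# Against a real ZooKeeper, "/" always exists, so the chroot is created:
zk = ToyZkClient(nodes=["/"])
create_persistent(zk, "/kafka")
```

This matches the symptom above: the missing / explains both the zkui warning and why Kafka never gets as far as creating /kafka.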

heyitsanthony added this to the v0.0.4 milestone Jul 11, 2017

corpix commented Jul 11, 2017

You need to create the namespace nodes manually; Kafka can't do it automatically here for some reason:

zkctl create '/' ''
zkctl create '/brokers' ''
zkctl create '/brokers/ids' ''
zkctl create '/brokers/topics' ''


F21 commented Jul 11, 2017

@corpix Kafka has automatically created the chroot path (if it does not exist) since 0.8.2.0; see KAFKA-404. I've been using these Kafka images and they have always created the chroot (since 0.9.0.0).
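For context on what "the chroot path" means here: Kafka treats everything after the first / in zookeeper.connect as the chroot, and connects to the host list without it (visible in the log above, where the connectString has no /kafka suffix). A rough illustration of that split (split_connect is a hypothetical helper, not Kafka code):

```python
def split_connect(connect):
    """Split a ZooKeeper connect string into (hosts, chroot)."""
    idx = connect.find("/")
    if idx == -1:
        return connect, "/"  # no chroot: the client operates at the root
    return connect[:idx], connect[idx:]

hosts, chroot = split_connect("m9edd51-zetcd.m9edd51:2181/kafka")
print(hosts)   # m9edd51-zetcd.m9edd51:2181
print(chroot)  # /kafka
```

So the broker still needs the backend's / to exist before it can create the /kafka chroot under it.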

I think in this case, it is because zetcd does not create the / node by default, which is why zkui also complains about the missing root node on a fresh zetcd and etcd setup.


xiang90 commented Jul 11, 2017

@F21 Send a PR to fix it? Thank you!

xiang90 added the bug label Jul 11, 2017

F21 commented Jul 11, 2017

Hey @xiang90, unfortunately I am not too familiar with the code and am super swamped at the moment, so I am not sure I'll have the time to put together a PR.

However, I don't think this is easily fixed by just creating the root node. For example, zetcd currently does not complain if etcd is not accessible. That behavior is useful when etcd and zetcd are started with docker-compose and zetcd comes up much faster than etcd, but it means zetcd would probably need some sort of retry and timeout policy when attempting to create the root node.
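The retry-and-timeout idea could look roughly like this (a pure-Python sketch with an injected operation; the function names and backoff values are illustrative, not a proposed zetcd API):

```python
import time

def create_root_with_retry(create_root, timeout=30.0, backoff=0.5):
    """Keep retrying create_root until it succeeds or the timeout expires.

    create_root is any callable that raises while the backend is
    unreachable (e.g. zetcd started before etcd under docker-compose).
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            return create_root()
        except Exception as err:
            if time.monotonic() >= deadline:
                raise TimeoutError("gave up creating root node") from err
            time.sleep(backoff)

# Simulate etcd becoming reachable on the third attempt:
attempts = {"n": 0}
def flaky_create_root():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("etcd not up yet")
    return "created /"

print(create_root_with_retry(flaky_create_root, timeout=5.0, backoff=0.01))
# created /
```

A fixed retry budget like this keeps the current "start zetcd before etcd" workflow working while still surfacing a hard failure if etcd never comes up.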

heyitsanthony pushed a commit to heyitsanthony/zetcd that referenced this issue Aug 7, 2017
Consolidates NoNode checking, adds special root Cversion handling.

Fixes etcd-io#75